
Commit 07b3504 (v4.0.0-beta.5)

Committed Jul 17, 2023
1 parent be1e275


44 files changed (+492, -14003 lines)
 

.gitignore (+1)

````diff
@@ -1,4 +1,5 @@
 node_modules
 yarn-error.log
+codegen.log
 dist
 /*.tgz
````

README.md (+30 -25)

````diff
@@ -29,16 +29,15 @@ const openai = new OpenAI({
 });
 
 async function main() {
-  const completion = await openai.completions.create({
-    model: 'text-davinci-002',
-    prompt: 'Say this is a test',
-    max_tokens: 6,
-    temperature: 0,
+  const completion = await openai.chat.completions.create({
+    messages: [{ role: 'user', content: 'Say this is a test' }],
+    model: 'gpt-3.5-turbo',
   });
 
   console.log(completion.choices);
 }
-main().catch(console.error);
+
+main();
 ```
 
 ## Streaming Responses
````
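
Taken together, the updated quickstart amounts to the following standalone sketch; the explicit `apiKey` option and the way the reply text is read from `choices[0].message` are illustrative assumptions, not part of the diff above.

```ts
import OpenAI from 'openai';

// Standalone sketch of the updated quickstart; reading the key from an
// environment variable is an assumption for illustration.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  });

  // Chat choices carry a `message` object rather than the old `text` field.
  console.log(completion.choices[0]?.message?.content);
}

main();
```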

````diff
@@ -48,16 +47,20 @@ We provide support for streaming responses using Server Side Events (SSE).
 ```ts
 import OpenAI from 'openai';
 
-const client = new OpenAI();
+const openai = new OpenAI();
 
-const stream = await client.completions.create({
-  prompt: 'Say this is a test',
-  model: 'text-davinci-003',
-  stream: true,
-});
-for await (const part of stream) {
-  process.stdout.write(part.choices[0]?.text || '');
+async function main() {
+  const completion = await openai.chat.completions.create({
+    model: 'gpt-4',
+    messages: [{ role: 'user', content: 'Say this is a test' }],
+    stream: true,
+  });
+  for await (const part of stream) {
+    process.stdout.write(part.choices[0]?.text || '');
+  }
 }
+
+main();
 ```
 
 If you need to cancel a stream, you can `break` from the loop
````
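
Note that the new streaming snippet assigns the response to `completion` but iterates over `stream`, and it still reads `part.choices[0]?.text`, which is the shape of the old completions API. A minimal working sketch, assuming chat-completion chunks expose incremental text under `choices[0].delta.content`:

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  // Name the streamed response after the variable the loop iterates over.
  const stream = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });
  for await (const part of stream) {
    // Chat chunks deliver incremental content under `delta`, not `text`.
    process.stdout.write(part.choices[0]?.delta?.content || '');
  }
}

main();
```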

````diff
@@ -76,15 +79,14 @@ const openai = new OpenAI({
 });
 
 async function main() {
-  const params: OpenAI.CompletionCreateParams = {
-    model: 'text-davinci-002',
-    prompt: 'Say this is a test',
-    max_tokens: 6,
-    temperature: 0,
+  const params: OpenAI.Chat.CompletionCreateParams = {
+    messages: [{ role: 'user', content: 'Say this is a test' }],
+    model: 'gpt-3.5-turbo',
   };
-  const completion: OpenAI.Completion = await openai.completions.create(params);
+  const completion: OpenAI.Chat.ChatCompletion = await openai.chat.completions.create(params);
 }
-main().catch(console.error);
+
+main();
 ```
 
 Documentation for each method, request param, and response field are available in docstrings and will appear on hover in most modern editors.
````
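
The renamed types can also annotate user code directly. A brief sketch with a hypothetical `ask` helper (not part of the SDK), reusing the `OpenAI.Chat.CompletionCreateParams` and `OpenAI.Chat.ChatCompletion` types from the example above:

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

// Hypothetical helper: builds typed params and returns the first reply text.
async function ask(question: string): Promise<string | null> {
  const params: OpenAI.Chat.CompletionCreateParams = {
    messages: [{ role: 'user', content: question }],
    model: 'gpt-3.5-turbo',
  };
  const completion: OpenAI.Chat.ChatCompletion = await openai.chat.completions.create(params);
  return completion.choices[0]?.message?.content ?? null;
}
```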

````diff
@@ -141,10 +143,13 @@ async function main() {
         console.log(err.name); // BadRequestError
 
         console.log(err.headers); // {server: 'nginx', ...}
+      } else {
+        throw err;
       }
     });
 }
-main().catch(console.error);
+
+main();
 ```
 
 Error codes are as followed:
````
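
The `.catch` handler shown here can equally be written with try/catch; a sketch assuming the `OpenAI.APIError` class that this README's error-handling section relies on:

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  try {
    await openai.chat.completions.create({
      messages: [{ role: 'user', content: 'Say this is a test' }],
      model: 'gpt-3.5-turbo',
    });
  } catch (err) {
    if (err instanceof OpenAI.APIError) {
      // The same fields logged in the example above.
      console.log(err.status, err.name, err.headers);
    } else {
      // Re-throw anything that is not an API error, as the diff now does.
      throw err;
    }
  }
}

main();
```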

````diff
@@ -176,7 +181,7 @@ const openai = new OpenAI({
 });
 
 // Or, configure per-request:
-openai.embeddings.create({ model: 'text-similarity-babbage-001', input: 'The food was delicious and the waiter...' }, {
+await openai.chat.completions.create({ messages: [{ role: 'user', content: 'How can I get the name of the current day in Node.js?' }], model: 'gpt-3.5-turbo' }, {
   maxRetries: 5,
 });
 ```
````

````diff
@@ -193,7 +198,7 @@ const openai = new OpenAI({
 });
 
 // Override per-request:
-openai.edits.create({ model: 'text-davinci-edit-001', input: 'What day of the wek is it?', instruction: 'Fix the spelling mistakes' }, {
+await openai.chat.completions.create({ messages: [{ role: 'user', content: 'How can I list all files in a directory using Python?' }], model: 'gpt-3.5-turbo' }, {
   timeout: 5 * 1000,
 });
 ```
````

````diff
@@ -219,7 +224,7 @@ const openai = new OpenAI({
 });
 
 // Override per-request:
-openai.models.list({
+await openai.models.list({
   baseURL: 'http://localhost:8080/test-api',
   httpAgent: new http.Agent({ keepAlive: false }),
 })
````
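
For reference, a self-contained sketch of that last per-request override, assuming Node's built-in `http` module for the agent; the client-level `keepAlive: true` agent is illustrative and not part of the diff:

```ts
import http from 'http';
import OpenAI from 'openai';

// Client-level agent shown for contrast; the per-request options below
// override it for a single call.
const openai = new OpenAI({
  httpAgent: new http.Agent({ keepAlive: true }),
});

async function main() {
  const page = await openai.models.list({
    baseURL: 'http://localhost:8080/test-api',
    httpAgent: new http.Agent({ keepAlive: false }),
  });
  console.log(page);
}

main();
```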
