
I'm trying to connect the OpenAI API to my Vue.js project. Everything looks OK, but every time I try to make a POST request I get a 429 status code (Too Many Requests), even though I haven't had the chance to make a single successful request yet. Any help?

Response:

{
    "message": "Request failed with status code 429",
    "name": "Error",
    "stack": "Error: Request failed with status code 429\n    at createError (C:\\Users\\sim\\Documents\\SC\\server\\node_modules\\axios\\lib\\core\\createError.js:16:15)\n    at settle (C:\\Users\\sim\\Documents\\SC\\server\\node_modules\\axios\\lib\\core\\settle.js:17:12)\n    at IncomingMessage.handleStreamEnd (C:\\Users\\sim\\Documents\\SC\\server\\node_modules\\axios\\lib\\adapters\\http.js:322:11)\n    at IncomingMessage.emit (events.js:412:35)\n    at endReadableNT (internal/streams/readable.js:1333:12)\n    at processTicksAndRejections (internal/process/task_queues.js:82:21)",
    "config": {
        "transitional": {
            "silentJSONParsing": true,
            "forcedJSONParsing": true,
            "clarifyTimeoutError": false
        },
        "transformRequest": [
            null
        ],
        "transformResponse": [
            null
        ],
        "timeout": 0,
        "xsrfCookieName": "XSRF-TOKEN",
        "xsrfHeaderName": "X-XSRF-TOKEN",
        "maxContentLength": -1,
        "maxBodyLength": -1,
        "headers": {
            "Accept": "application/json, text/plain, */*",
            "Content-Type": "application/json",
            "User-Agent": "OpenAI/NodeJS/3.1.0",
            "Authorization": "Bearer secret",
            "Content-Length": 137
        },
        "method": "post",
        "data": "{\"model\":\"text-davinci-003\",\"prompt\":\"option-2\",\"temperature\":0,\"max_tokens\":3000,\"top_p\":1,\"frequency_penalty\":0.5,\"presence_penalty\":0}",
        "url": "https://api.openai.com/v1/completions"
    },
    "status": 429
}

My method in Vue.js:

async handleSelect() {
      try {
        const res = await fetch("http://localhost:8000/", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            question: this.selectedOption,
          })
        })

        const data = await res.json();
        console.log(data);
      } catch (error) {
        console.error(error);
      }
    }

On the server side:

app.post("/", async (req, res) => {
  try {
    const question = req.body.question;

    const response = await openai.createCompletion({
      model: "text-davinci-003",
      prompt: `${question}`,
      temperature: 0, // Higher values means the model will take more risks.
      max_tokens: 3000, // The maximum number of tokens to generate in the completion. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
      top_p: 1, // alternative to sampling with temperature, called nucleus sampling
      frequency_penalty: 0.5, // Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
      presence_penalty: 0, // Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
    });
    // console.log(response);
    res.status(200).send({
      bot: response.data.choices[0].text,
    });
  } catch (error) {
    // console.error(error);
    res.status(500).send(error || "Something went wrong");
  }
});
  • OpenAI requests are rate-limited by organization. Are you passing an API key to identify your org? Commented Jan 24, 2023 at 18:18
  • It sounds like someone else is using your API key. Commented Feb 18, 2023 at 2:37
  • Me too: I got error 429 as soon as I used the sample code from ChatGPT, and on the very first try it said I'd already exceeded the limit (!?), while I could still chat via their webpage. That's confusing. Commented Mar 8, 2023 at 7:43

2 Answers


As stated in the official OpenAI article:

This (i.e., 429) error message indicates that you have hit your assigned rate limit for the API. This means that you have submitted too many tokens or requests in a short period of time and have exceeded the number of requests allowed. This could happen for several reasons, such as:

  • You are using a loop or a script that makes frequent or concurrent requests.

  • You are sharing your API key with other users or applications.

  • You are using a free plan that has a low rate limit.
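If requests really are coming in too fast, the standard mitigation is to retry with exponential backoff. A minimal sketch (the `withBackoff` wrapper and the delay values are illustrative, not part of the OpenAI client):

```javascript
// Retry an async, request-making function when it fails with HTTP 429,
// doubling the wait between attempts (500 ms, 1 s, 2 s, ...).
async function withBackoff(fn, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Axios puts the status on err.response; plain errors may use err.status.
      const status = err.response?.status ?? err.status;
      if (status !== 429 || attempt >= maxRetries) throw err;
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

On the server this would wrap the completion call, e.g. `await withBackoff(() => openai.createCompletion({ ... }))`. Note that backoff only helps with a genuine rate limit; it won't fix an exhausted quota.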

Working example

Frontend

HelloWorld.vue

<template>
  <div class="hello"></div>

  <select v-model="selected" @change="handleSelect()">
    <option disabled value="">Please select one</option>
    <option>Say this is a test</option>
    <option>Say nothing</option>
  </select>

  <div class="container-selected">Selected: {{ selected }}</div>

  <div class="container-data" v-if="showData">{{ showData.bot }}</div>
</template>

<script>
export default {
  data: function () {
    return {
      selected: "",
      showData: "",
    };
  },
  methods: {
    async handleSelect() {
      try {
        const res = await fetch("http://localhost:3000/", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            question: this.selected,
          }),
        });

        const data = await res.json();
        this.showData = data;
        console.log(data);
      } catch (error) {
        console.error(error);
      }
    },
  },
};
</script>

<style lang="scss">
.container-selected {
  margin-top: 12px;
  font-size: 20px;
}

.container-data {
  margin-top: 24px;
  font-size: 20px;
}
</style>
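One caveat about `handleSelect` above: `fetch` only rejects on network failures, not on HTTP error statuses, so a 429 or 500 relayed by the server would still be parsed as if it were a normal answer. A sketch of the same request with an explicit `res.ok` check (the `postQuestion` helper and its injectable `fetchImpl` parameter are illustrative):

```javascript
// POST the selected question and fail loudly on any non-2xx response.
// fetchImpl is injectable so the function can be exercised without a server.
async function postQuestion(question, fetchImpl = fetch, url = "http://localhost:3000/") {
  const res = await fetchImpl(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  if (!res.ok) {
    // Surface the status so the UI can show something better than silence.
    throw new Error(`Request failed with status code ${res.status}`);
  }
  return res.json();
}
```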

package.json

{
  "name": "openai",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "serve": "vue-cli-service serve",
    "build": "vue-cli-service build",
    "lint": "vue-cli-service lint"
  },
  "dependencies": {
    "register-service-worker": "^1.7.2",
    "vue": "^3.2.13",
    "vue-class-component": "^8.0.0-0",
    "vue-router": "^4.0.3",
    "vuex": "^4.0.0"
  },
  "devDependencies": {
    "@typescript-eslint/eslint-plugin": "^5.4.0",
    "@typescript-eslint/parser": "^5.4.0",
    "@vue/cli-plugin-eslint": "~5.0.0",
    "@vue/cli-plugin-pwa": "~5.0.0",
    "@vue/cli-plugin-router": "~5.0.0",
    "@vue/cli-plugin-typescript": "~5.0.0",
    "@vue/cli-plugin-vuex": "~5.0.0",
    "@vue/cli-service": "~5.0.0",
    "@vue/eslint-config-typescript": "^9.1.0",
    "eslint": "^7.32.0",
    "eslint-config-prettier": "^8.3.0",
    "eslint-plugin-prettier": "^4.0.0",
    "eslint-plugin-vue": "^8.0.3",
    "prettier": "^2.4.1",
    "sass": "^1.32.7",
    "sass-loader": "^12.0.0",
    "typescript": "~4.5.5"
  }
}

Backend

index.js

const express = require('express');
const app = express();
app.use(express.json());

const cors = require('cors');
app.use(cors());

app.post('/', async(req, res) => {
  try {
    const { Configuration, OpenAIApi } = require('openai');
    const configuration = new Configuration({
        apiKey: 'sk-xxxxxxxxxxxxxxxxxxxx'
    });
    const openai = new OpenAIApi(configuration);

    const question = req.body.question;

    await openai.createCompletion({
      model: 'text-davinci-003',
      prompt: question,
      temperature: 0,
      max_tokens: 7
    })
    .then((response) => {
      console.log(response.data.choices[0].text);
      res.status(200).send({ bot: response.data.choices[0].text });
    })
    .catch((err) => {
      res.status(400).send({ message: err.message });
    })
  } catch (error) {
    res.status(500).send(error || 'Something went wrong');
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}.`);
});
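One caveat about `index.js` above: hardcoding `apiKey` makes it easy to leak the key, and a leaked key being used by someone else is itself a common cause of surprise 429s. A safer sketch reads it from the environment instead (the `resolveApiKey` helper is illustrative):

```javascript
// Resolve the OpenAI key from the environment instead of hardcoding it,
// failing fast with a clear message when it is missing.
function resolveApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("Set the OPENAI_API_KEY environment variable");
  }
  return key;
}
```

The `Configuration` above would then take `apiKey: resolveApiKey()`, and the key lives only in the shell (or a `.env` file loaded with a package such as dotenv) rather than in source control.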

package.json

{
  "name": "openai-server",
  "version": "1.0.0",
  "description": "Express server",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "cors": "^2.8.5",
    "express": "^4.18.2",
    "nodemon": "^2.0.20",
    "openai": "^3.1.0"
  }
}

Output

(animated GIF showing the selected option and the bot's reply)


2 Comments

I'm seeing the same error with a brand new API key, and a request of less than 400 words. The code I'm using worked a few weeks ago with a different API key.
Is this your first and only OpenAI account? It seems like free credit is given based on a phone number. If you sign up with a different email but the same phone number, you won't get free credit. See this.

Solution: add credit.

  1. Go to account settings: https://platform.openai.com/settings
  2. Click 'Billing'
  3. Click on 'Add credit to balance'
  4. Wait 2-3 minutes for it to take effect.

More info here.

I'm not sure why the OpenAI API throws a 429 Too Many Requests when you don't have sufficient credit on your OpenAI API account; that's a bit confusing, as those two things seem unrelated.
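The two cases can usually be told apart from the error body: with the v3 Node client the axios error carries OpenAI's JSON error object, and (as an assumption about that shape) a quota problem reports `type: "insufficient_quota"` while a true rate limit does not. A sketch (`describe429` is a hypothetical helper):

```javascript
// Classify an axios-style 429 from the openai v3 client by inspecting the
// error body; field names assume OpenAI's { error: { type, message } } shape.
function describe429(err) {
  const detail = err.response?.data?.error;
  if (!detail) return "no error body";
  if (detail.type === "insufficient_quota") return "out of credit";
  return `rate limited: ${detail.message}`;
}
```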

