GPT4Free TypeScript Version
Providing a free OpenAI GPT-4 API!
GPT4Free TypeScript Version is a project that aims to provide a free API for GPT-4, the large language model developed by OpenAI. It is written in TypeScript, a superset of JavaScript with static types, and runs on Node.js. Rather than calling OpenAI directly, it exposes a unified API that relays requests to the chat endpoints of various third-party sites (forefront.ai, you.com, and the others listed in the table below).
To get started, create a file named .env in the project root.
This step is required for every way of running the project.
```
http_proxy=http://host:port
rapid_api_key=xxxxxxxxxx
EMAIL_TYPE=temp-email44
DEBUG=0
POOL_SIZE=1
```
- `http_proxy`: optional. If you have trouble reaching the target site, set this to route requests through a proxy server.
- `rapid_api_key`: required when using the forefront API. The API key is used to receive registration emails; you can get one from RapidAPI.
- `EMAIL_TYPE`: which temporary-email service to use. One of `temp-email`, `temp-email44`, `tempmail-lol`:
  - `temp-email`: limited to 100 requests per day (linking a credit card to your account raises the limit). Very reliable!
  - `temp-email44`: limited to 100 requests per day. Stable!
  - `tempmail-lol`: limited to 25 requests per 5 minutes. Not stable.
- `DEBUG`: only meaningful when using `forefront`. Set it to 1 when running locally to show the reverse-engineering process.
- `POOL_SIZE`: `forefront` concurrency size. Keep it set to 1 until you have run the project successfully!!! You can hold {POOL_SIZE} conversations concurrently; a larger pool allows more parallel conversations but also consumes more RAM.
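For reference, here is a minimal sketch of how these variables might be read in a Node.js/TypeScript app. It assumes the `dotenv` package and the variable names from the sample `.env` above; the project's actual configuration loader may differ.

```ts
// config.ts -- illustrative sketch only, not the project's real loader
import 'dotenv/config';

interface AppConfig {
  httpProxy?: string;   // http_proxy: optional proxy for hard-to-reach sites
  rapidApiKey?: string; // rapid_api_key: used to receive registration emails (forefront)
  emailType: string;    // EMAIL_TYPE: temp-email | temp-email44 | tempmail-lol
  debug: boolean;       // DEBUG: 1 shows the reverse-engineering process
  poolSize: number;     // POOL_SIZE: forefront concurrency size
}

export const config: AppConfig = {
  httpProxy: process.env.http_proxy,
  rapidApiKey: process.env.rapid_api_key,
  emailType: process.env.EMAIL_TYPE ?? 'temp-email44',
  debug: process.env.DEBUG === '1',
  poolSize: Number(process.env.POOL_SIZE ?? 1),
};
```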
Run it locally 🖥️
```bash
# install modules
yarn
# start server
yarn start
```
Run with docker 🐳 (Recommended!)
```bash
docker run -p 3000:3000 --env-file .env xiangsx/gpt4free-ts:latest
```
Deploy with docker-compose 🎭
To run the application with docker-compose, create the .env file first as described above, then deploy:

```bash
docker-compose up --build -d
```
Request Parameters 📝
- `prompt`: your question. It can be a `string` or a `jsonstr`.
  - example `jsonstr`: `[{"role":"user","content":"你好\n"},{"role":"assistant","content":"你好!有什么我可以帮助你的吗?"},{"role":"user","content":"你是谁"}]`
  - example `string`: `你是谁` ("Who are you?")
- `model`: default `gpt3.5-turbo`. Models include `gpt4` and `gpt3.5-turbo`.
- `site`: default `you`. Target site; options include `forefront`, `you`, and `mcbbs`.
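For illustration, the two prompt forms can be encoded like this in TypeScript. This is only a sketch; it assumes the parameters are passed as URL query-string values.

```ts
// Encode the request parameters described above (illustrative sketch).
const history = [
  { role: 'user', content: '你好\n' },
  { role: 'assistant', content: '你好!有什么我可以帮助你的吗?' },
  { role: 'user', content: '你是谁' },
];

// prompt as a plain string
const plainParams = new URLSearchParams({ prompt: '你是谁', model: 'gpt3.5-turbo', site: 'you' });

// prompt as a jsonstr (the message history serialized to JSON)
const jsonstrParams = new URLSearchParams({
  prompt: JSON.stringify(history),
  model: 'gpt4',
  site: 'forefront',
});

console.log(plainParams.toString());
console.log(jsonstrParams.toString());
```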
Response Parameters 🔙
Response when chat ends (/ask):
```ts
interface ChatResponse {
  content: string;
  error?: string;
}
```
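A minimal sketch of calling `/ask` from TypeScript, assuming the server listens on port 3000 (as in the docker command above) and accepts the parameters as a query string:

```ts
// Uses the ChatResponse interface shown above.
async function ask(prompt: string, site = 'you', model = 'gpt3.5-turbo'): Promise<ChatResponse> {
  const params = new URLSearchParams({ prompt, model, site });
  const res = await fetch(`http://127.0.0.1:3000/ask?${params}`);
  const data = (await res.json()) as ChatResponse;
  if (data.error) {
    throw new Error(data.error);
  }
  return data;
}

// usage: const { content } = await ask('你是谁');
```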
Response when streaming (/ask/stream), delivered as server-sent events:
```
event: message
data: {"content":"I"}

event: done
data: {"content":"'m"}

event: error
data: {"error":"something wrong"}
```
Example 💡
- Request to site `you` with history (a sketch of such a request appears after the response below).
req:
res:
```json
{
  "content": "Hi there! How can I assist you today?"
}
```
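The request itself is not reproduced above. Reusing the hypothetical `ask()` helper sketched earlier (same assumptions: port 3000, query-string parameters), a request to site `you` with history might look like:

```ts
// Hypothetical request: site `you` with a jsonstr history (the message below is illustrative).
const history = [{ role: 'user', content: 'Hi' }];
const res = await ask(JSON.stringify(history), 'you');
console.log(res.content); // e.g. "Hi there! How can I assist you today?"
```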
| model | support | status | active time |
| --- | --- | --- | --- |
| ai.mcbbs.gq | gpt3.5 | | after 2023-06-03 |
| forefront.ai | 👍GPT-4/gpt3.5 | | after 2023-06-03 |
| aidream | GPT-3.5 | | after 2023-05-12 |
| you.com | GPT-3.5 | | after 2023-05-12 |
| phind.com | GPT-4 / Internet / good search | | |
| bing.com/chat | GPT-4/3.5 | | |
| poe.com | GPT-4/3.5 | | |
| writesonic.com | GPT-3.5 / Internet | | |
| t3nsor.com | GPT-3.5 | | |
This GitHub repository contains APIs from various sites without their authorization or affiliation. The project is for learning purposes only; it is a personal hobby project. Site owners can contact the developer to request improved security or removal of their site from this repository.