Yomi Proxy
This is a public proxy for various AI models. Use the endpoints below with your preferred client.
Grok-4-Fast (OpenRouter)
https://yomi-proxy.onrender.com/openrouter/v1/chat/completions
DeepSeek R1 0528
https://yomi-proxy.onrender.com/tooruhost/v1/chat/completions
DeepSeek V3 0324 (Lab Hosted)
https://yomi-proxy.onrender.com/toorulab/v1/chat/completions
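For example, a minimal request against the OpenRouter endpoint above might look like the following sketch, assuming the standard OpenAI-compatible chat completions schema. The model name and Authorization header are placeholders, not values confirmed by this page; see the model name notes further down.

import requests

ENDPOINT = "https://yomi-proxy.onrender.com/openrouter/v1/chat/completions"

payload = {
    "model": "grok-4-fast",  # placeholder; see the "Model Name" notes below
    "messages": [{"role": "user", "content": "Hello!"}],
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": "Bearer YOUR_KEY_IF_REQUIRED"},  # only if your setup needs one
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])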
How to Use This Proxy
1. API Endpoint & Model Name
Use the full API Endpoint provided above in your client's API URL field. For the "Model Name" field in your client, you have two options:
- For Built-in Providers (OpenAI, Gemini, etc.): You can use any model name supported by that provider (e.g., gpt-4o, gemini-pro).
- For Custom Providers: If a "Required Model" is listed, you must use that exact name in your client's model field. Otherwise, you can typically use any name you like, as the proxy will replace it with the correct model ID automatically. See the configuration sketch below.
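As a sketch, here is how a client built on the openai Python package could point at one of the endpoints above. The assumption is that the client appends /chat/completions to the base URL itself; if your client expects the full URL, paste the endpoint exactly as listed. The api_key and model values are placeholders.

from openai import OpenAI

client = OpenAI(
    base_url="https://yomi-proxy.onrender.com/toorulab/v1",  # endpoint without /chat/completions
    api_key="placeholder",  # the proxy may not check this value
)

completion = client.chat.completions.create(
    model="deepseek-v3-0324",  # for custom providers the proxy may replace this automatically
    messages=[{"role": "user", "content": "Say hi."}],
)
print(completion.choices[0].message.content)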
2. Custom Prompts
This proxy allows you to inject a custom, one-time prompt into the prompt structure. This is useful for things like character-specific instructions or memory.
To use it, place a special tag in your first user message (usually along with character persona and scenario information):
<Custom_Prompt>
This is my special instruction for the AI.
It can be multiple lines.
</Custom_Prompt>
The proxy will extract this text. If the administrator has configured a <<CUSTOM_PROMPT>> placeholder in the prompt structure for this provider, your text will be injected there. If not, this tag will be ignored.
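Purely as an illustration (this is not the proxy's actual code), the extraction-and-injection behavior described above amounts to something like:

import re

TAG_RE = re.compile(r"<Custom_Prompt>(.*?)</Custom_Prompt>", re.DOTALL)

def apply_custom_prompt(first_user_message: str, prompt_structure: str) -> tuple[str, str]:
    # Pull the <Custom_Prompt> block out of the first user message and substitute
    # it for the <<CUSTOM_PROMPT>> placeholder, if one is configured.
    match = TAG_RE.search(first_user_message)
    if not match:
        return first_user_message, prompt_structure
    custom_text = match.group(1).strip()
    cleaned_message = TAG_RE.sub("", first_user_message).strip()
    if "<<CUSTOM_PROMPT>>" in prompt_structure:
        prompt_structure = prompt_structure.replace("<<CUSTOM_PROMPT>>", custom_text)
    # No placeholder configured: the extracted text is simply dropped.
    return cleaned_message, prompt_structure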
3. Commands
Commands are special tags you can include anywhere in your messages to inject pre-defined blocks of text, such as jailbreaks or persona modifiers.
For example, to use a jailbreak command, you might include the tag <JAILBREAK_V4> in your message. The proxy will detect this, remove the tag, and inject the corresponding content at the location defined by the administrator.
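Again as an illustration only (the real command names and their content are defined by the proxy administrator and are not shown here), the behavior amounts to:

COMMANDS = {
    "<JAILBREAK_V4>": "(administrator-defined block of text)",
}

def process_commands(message: str) -> tuple[str, list[str]]:
    injected = []
    for tag, block in COMMANDS.items():
        if tag in message:
            message = message.replace(tag, "")  # strip the tag from the user text
            injected.append(block)              # queue the block for injection
    # The queued blocks are inserted at whatever location the administrator defined.
    return message.strip(), injected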