Ollama host. To run Ollama as a Docker container and publish its default port: docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama.
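Once the container is up, you can talk to it from the official Python library. The following is a minimal sketch, not taken from the original page: it assumes the ollama Python package (pip install ollama) is installed, the container above is listening on localhost:11434, and the llama2 model has already been pulled.

    import ollama

    # Point the client at the containerized server; host is the only non-default setting.
    client = ollama.Client(host="http://localhost:11434")

    # Ask a simple question; swap in any model you have pulled.
    reply = client.chat(
        model="llama2",
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
    )
    print(reply["message"]["content"])

The same call works against any host or port mentioned later in these notes; only the host argument changes.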

Feb 25, 2024 · ollama create my-own-model -f Modelfile, then ollama run my-own-model.

It includes features such as: multiple conversations 💬; detecting which models are available to use 📋; automatically checking whether Ollama is running ⏰; the ability to change the host where Ollama is running 🖥️; persistence 📀; import & export of chats 🚛. Cheat sheet.

Or, with Docker, you can specify a port. The Dockerfile from the Ollama repository shows that it serves on host 0.0.0.0. You can validate a compose file with docker compose --dry-run up -d (run it from the path containing the compose file). However, when I checked using ss -tuln, port 11434 still appeared bound to 127.0.0.1.

In the latest release (v0.1.23), they've made improvements to how Ollama handles multimodal…

Oct 5, 2023 · docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Welcome to my Ollama Chat, an interface for the official ollama CLI that makes it easier to chat. Spoiler: run Ollama and use ngrok to expose your Mac mini to the internet.

Dec 19, 2023 · Get started. Windows error while pulling a model manifest: "wsarecv: An existing connection was forcibly closed by the remote host."

Get up and running with Llama 3, Mistral, Gemma 2, and other large language models, for example with OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve. Now you know how to create a custom model from a model hosted on Hugging Face with Ollama.

Note: While we support self-hosted LLMs, you will get significantly better responses with a more powerful model like GPT-4.

Required credentials: OLLAMA_HOST: your Ollama host URL. If this does not work, try the following: codegemma.

Jan 12, 2024 · Ollama provides an interface to self-host and interact with open-source LLMs (large language models) using its binary or container image.

Mar 5, 2024 · You've overwritten OLLAMA_HOST, so the service serves on port 33020. The ollama CLI uses 11434 by default, so unless you told it to use 33020, it will talk to 11434, which isn't open.

This family includes three cutting-edge models, for example wizardlm2:7b: the fastest model, with performance comparable to open-source models ten times its size.

Embeddings: ollama.embeddings(model='llama3', prompt='The sky is blue because of rayleigh scattering'). Loaded models: ollama.ps().

The initial versions of the Ollama Python and JavaScript libraries are now available, making it easy to integrate your Python, JavaScript, or TypeScript app with Ollama in a few lines of code. Both libraries include all the features of the Ollama REST API, are familiar in design, and are compatible with new and previous versions of Ollama.

Use environment variables. Make sure to replace <PID> with your actual process ID. Unless this is specific to running Ollama natively on Windows. Then I changed the same local IP inside the oterm config, and everything works fine, except for one thing that happens when I press CTRL-N and select an AI…

Please read the Ollama Setup Guide if you struggle to get the Ollama connection running. This issue occurs every time I change the IP/port.

This example walks through building a retrieval-augmented generation (RAG) application using Ollama and embedding models.

Open TWO terminals. In the first, start the LLM server on a custom port: export OLLAMA_HOST=localhost:8888, then ollama serve; it should print the usual startup output.
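In the second terminal (or from a script), point the client at the same non-default address. This is a sketch under the assumptions above: the server from the previous command is listening on localhost:8888 and llama3 has been pulled; the prompt simply reuses the embeddings example quoted earlier.

    import ollama

    # The client must target the same non-default address the server was started with.
    client = ollama.Client(host="http://localhost:8888")

    # Embeddings request against the custom port.
    result = client.embeddings(
        model="llama3",
        prompt="The sky is blue because of rayleigh scattering",
    )
    print(len(result["embedding"]))  # length of the returned vector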
Llama 3 represents a large improvement over Llama 2 and other openly available models: it was trained on a dataset seven times larger than Llama 2's, doubles Llama 2's 8K context length, encodes language much more efficiently thanks to a larger token vocabulary of 128K tokens, and produces less than 1/3 of the false "refusals."

Apr 18, 2024 · The most capable model. Whether you are using Windows, Linux, or macOS, Ollama supports advanced models like Llama 3, Mistral, and Gemma, offering a user-friendly and efficient solution for developers and researchers.

Why run LLMs locally? I used to have a GPT-4 subscription, but it was barely paying for itself.

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama. Run a model: Dec 20, 2023 · Now that Ollama is up and running, execute docker exec -it ollama ollama run llama2. More models can be found in the Ollama library.

Mar 10, 2024 · Step 9 → Access Ollama Web UI remotely. Amidst the several user-experience options available, I really liked Ollama Web UI; for more detailed information, refer to the Ollama Quickstart Docker. While the command line is good, let's get ourselves a ChatGPT-like user experience.

Ollama binds to 127.0.0.1 on port 11434 by default; use the OLLAMA_HOST environment variable to change the bind address (see the configuration notes below for how to set environment variables on your platform). How do you use Ollama behind a proxy server? Ollama runs an HTTP server, so it can be exposed through a proxy server such as Nginx. "Complete Notes on Ollama Listening Settings and Environment Variable Configuration" covers how to configure the listen address, set environment variables, use a proxy server, and tune network performance to keep the Ollama service reliable.

Setup. First, follow these instructions to set up and run a local Ollama instance: download and install Ollama on one of the supported platforms (including Windows Subsystem for Linux), then fetch an available LLM model via ollama pull <name-of-model>. View a list of available models in the model library and pull one to use locally.

Feb 18, 2024 · Ollama comes with the ollama command line tool. Running Ollama from cmd: Ollama communicates via pop-up messages. Enter ollama in a PowerShell terminal (or DOS terminal) to see what you can do with it; it should show you the help menu: Usage: ollama [flags], ollama [command]. Available commands include serve (start ollama) and create (create a model from a Modelfile).
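The same pull-and-verify workflow is available from the Python library. A small sketch, assuming the ollama package is installed and the local server is running on the default port; llama3 is just an example model name.

    import ollama

    # Download a model through the API, equivalent to `ollama pull llama3` on the CLI.
    ollama.pull("llama3")

    # Confirm the freshly pulled model shows up in the local model list.
    print(ollama.list())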
Let's run a model and ask Ollama a question.

Dec 7, 2023 · On the host machine, open an admin PowerShell and type: netsh interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=8080 connectaddress=172.… (the address of the machine or VM that actually runs Ollama).

Ollama is a robust framework designed for local execution of large language models; it provides a user-friendly approach to running them. How are you managing the ollama service? OLLAMA_HOST is an environment variable that needs to be applied to ollama serve; however you start the service or run the command, that variable needs to be available to the process. It doesn't look like your distro is using systemd. The server was not responding, but it is able to communicate now.

Ollama is a local inference framework client that allows one-click deployment of LLMs such as Llama 2, Mistral, Llava, etc. Dify supports integrating the LLM and text-embedding capabilities of large language models deployed with Ollama.

Great! So you have the tool that can fetch LLMs onto your system. Mar 7, 2024 · Now you are ready to run Ollama and download some models; to get started quickly with the open-source LLM Mistral-7b, it takes just two commands.

In the chat app, the first option creates a new chat, and the second one opens the settings screen where you can change how everything works; below that are all the chats. In a chat you can change the title, or tap the sparkle icon to let the AI find one for you; to rename a chat, tap and hold its tab until a popup dialog appears; to delete one, swipe it from left to right.

Feb 17, 2024 · Ollama sets itself up as a local server on port 11434. If another app also binds 127.0.0.1:11434 (the default Ollama address on Windows), it conflicts with the Ollama server itself or with other Ollama-related apps, and they simply don't work.

May 7, 2024 · Once you have installed Ollama, check whether it is running: open a web browser and go to localhost:11434. It should show the message "Ollama is running." Now that the server is running, you can use curl commands to make requests; a quick curl is enough to check that the API is responding.
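For a scripted version of that check, a plain HTTP GET against the root endpoint is enough. A sketch assuming the requests package and a server on the default address; the root path returns the same "Ollama is running" text you see in the browser.

    import requests

    # The root endpoint answers with plain text when the server is up.
    r = requests.get("http://localhost:11434", timeout=5)
    print(r.status_code, r.text)  # expect: 200 Ollama is running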
Mar 28, 2024 · Deploying LLMs on GKE with Ollama offers a compelling option for users prioritizing customization, flexibility, potential cost savings, and privacy within their LLM solutions.

Disclosure timeline: May 5, 2024 – Wiz Research reported the issue to Ollama. May 5, 2024 – Ollama acknowledged receipt of the report. May 5, 2024 – Ollama notified Wiz Research that they had committed a fix to GitHub. May 8, 2024 – Ollama released a patched version. Jun 24, 2024 · Ollama promptly investigated and addressed the issue while keeping us updated.

ollama is an LLM serving platform written in Go.

To get started, please create a new account (this initial account serves as an admin for Open WebUI). Now, by navigating to localhost:8080, you'll find yourself at Open WebUI. Installing both Ollama and Ollama Web UI using Docker Compose: simply run docker compose up -d --build; this command will install both Ollama and Ollama Web UI on your system. Please note we are using CPU only, so the AI will respond slowly; if you have a GPU, you can follow the instructions to run the container with GPU support.

Nov 4, 2023 · You can define the address Ollama uses by setting the environment variable OLLAMA_HOST. The API is documented here.

Ollama is pretty sweet; I'm self-hosting it using 3B models on an old X79 server.

Steps to reproduce: a newly installed server running Ubuntu 23.10. Installed Docker using sudo apt-get install -y docker-ce docker-ce-cli containerd.io. Followed the official installation guide for Ollama and installed the Gemma model.

Ollama - Danswer documentation: configure Danswer to use Ollama; see Gen AI Configs and Model Configs for how to set the environment variables for your particular deployment. Refer to the CPU-Only Pre-requisites and GPU-Accelerated Pre-requisites for setup instructions. Select whether the script will be executed CPU-only or GPU-accelerated (the GPU option is available when that capability is detected). Tick to use the system environment variables on the host system for lookup paths.

Jan 24, 2024 · Setting environment variables is easy, but you are probably doing it wrong when setting how Ollama runs. Set them right after watching this video.

Dec 15, 2023 · HTTPS_PROXY must be applied to ollama serve, since that's the process actually reaching out to get the models.

If Ollama runs as a macOS application, set environment variables with launchctl: call launchctl setenv for each variable. Open a terminal and enter launchctl setenv OLLAMA_HOST "0.0.0.0" and launchctl setenv OLLAMA_ORIGINS "*", then restart the Ollama application to apply the settings. Apr 5, 2024 · These configuration and usage tips give Ollama users flexibility and control so they can make the best use of its features while keeping operation secure and efficient.

Step 1: Check DNS settings: sometimes DNS settings can cause issues; try changing your DNS server to a public one like Google DNS (8.8.8.8) or Cloudflare DNS (1.1.1.1).

Don't forget, this is running on your CPU, not the GPU.

May 25, 2024 · Running Ollama on an AMD GPU. Jun 19, 2024 · Ollama is failing to run on the GPU and uses the CPU instead (OS: Windows, GPU: AMD, CPU: AMD, Ollama version 0.1.44). If I force it using HSA_OVERRIDE_GFX_VERSION=9.0.0, I get "Error: llama runner process has terminated: signal: aborted; error: Could not initialize Tensile host: No devices found."

The problem is when I run ollama from LangChain; the same code works on the Ollama server on my Mac, so I guess the issue is not with my code.

Oct 12, 2023 · Host Ollama using ngrok. Copy the URL provided by ngrok (the forwarding URL), which now hosts your Ollama Web UI application, and paste it into the browser of your mobile device.

OLLAMA_HOST=127.0.0.1:11435 ollama serve works, thanks @pdevine.

Jul 4, 2024 · Could the Ollama server address default to http://host.docker.internal:11434, and if so, where should that be changed? (#543)

Models: for convenience and copy-pastability, here is a list of interesting models you might want to try out. Oct 9, 2023 · $ OLLAMA_HOST="127.0.0.1:4711" ollama list shows, for example: ellie:latest (71f25ef48cab, 3.4 GB, 7 hours ago), jolie:latest (72c8b2005de1, 7.4 GB, 3 hours ago), everythinglm:latest (bb66cc8d6bfe, 7.4 GB, 3 hours ago), llama2:latest (7da22eda89ac, 3.8 GB, 26 hours ago), mistral-openorca:latest (12dc6acc14d0, 4.1 GB, 8 days ago), llama2-uncensored:latest (ff4791cdfa68, 3.8 GB, 8 days ago), and starcoder:latest (18be557f0e69, 1.8 GB). Recommended general-purpose models: llama3, mistral, llama2.

Ollama API: if you want to integrate Ollama into your own projects, Ollama offers both its own API and an OpenAI-compatible one. Feb 20, 2024 · Hi there, if you're looking to expose Ollama on the network, make sure to use OLLAMA_HOST=0.0.0.0; to expose Ollama externally on port 8080 you can use OLLAMA_HOST=0.0.0.0:8080 ollama serve.

Feb 8, 2024 · Ollama now has built-in compatibility with the OpenAI Chat Completions API, making it possible to use more tooling and applications with Ollama locally.
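With the OpenAI-compatible endpoint, existing OpenAI client code can be pointed at a local Ollama server. A sketch assuming the openai Python package and a pulled llama2 model; the API key is required by the client but ignored by Ollama, so any placeholder works.

    from openai import OpenAI

    # Reuse the OpenAI client against Ollama's /v1 endpoint.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    chat = client.chat.completions.create(
        model="llama2",
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(chat.choices[0].message.content)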
To give everyone using your instance of unSAGED access to self-hosted models like Llama 2 and Mistral, set the following environment variables.

Nov 13, 2023 · Hi @jjsarf, you can use the OLLAMA_HOST environment variable in combination with ollama serve; for example, to expose Ollama externally on port 8080 you can use OLLAMA_HOST=0.0.0.0:8080 ollama serve.

To integrate a model running on Ollama with TaskingAI, you first need a valid Ollama service.

For AMD GPUs there is a ROCm image: docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm.

Feb 15, 2024 · Ollama is now available on Windows in preview, making it possible to pull, run, and create large language models in a new native Windows experience. Ollama on Windows includes built-in GPU acceleration, access to the full model library, and serves the Ollama API, including OpenAI compatibility. Once Ollama is set up, you can open your cmd (command line) on Windows.

I have also performed the steps given in the docs. You can use something like OLLAMA_HOST=127.0.0.1:11435 ollama serve to start ollama serving on port 11435.

Oct 20, 2023 · I tried OLLAMA_HOST=127.0.0.1:11435 ollama serve, but my cmd cannot understand it; alas, no success. My port 11434 is occupied, and I wonder how I can change the one Ollama uses. On Windows, Ollama inherits your user and system environment variables: edit or create the Ollama environment variables for your user account (such as OLLAMA_HOST and OLLAMA_ORIGINS), click OK/Apply to save, and re-run Ollama.

Apr 12, 2024 · Bug summary: WebUI could not connect to Ollama. If you're experiencing connection issues, it's often because the WebUI docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) from inside the container; use the --network=host flag in your docker command to resolve this. Jan 23, 2024 · Ollama WebUI.

Oct 6, 2023 · Fire up your Command Prompt as admin and run the following to grab the PID of the problematic port: netstat -aon | findstr :11434. With the PID in hand, kill the process: taskkill /F /PID <PID>.

Ollama can now serve multiple requests at the same time, using only a little additional memory for each request. This enables use cases such as: handling multiple chat sessions at the same time; hosting a code-completion LLM for your internal team; processing different parts of a document simultaneously; and running several agents at once.

Jan 13, 2024 · podman exec -it ollama ollama run tinyllama, or alternatively run the CLI interface in a separate container: podman run -it --rm --add-host=host.docker.internal:host-gateway -e OLLAMA_HOST=host.docker.internal docker.io/ollama/ollama run tinyllama.

Mar 9, 2024 · In one sentence, Ollama is a simple, easy-to-use framework for running large language models locally, written in Go. You can think of it as a Docker analogue (like Docker, its list, pull, push, and run commands are built on the cobra package), and it really does define a Docker-like standard for packaging model applications, as the later sections make clear. Apr 16, 2024 · This is where Ollama helps: compared with using PyTorch directly, or with llama.cpp, which focuses on quantization and conversion, Ollama can deploy an LLM and stand up an API service with a single command.

Feb 4, 2024 · Ollama helps you get up and running with large language models, locally, in very easy and simple steps.

Apr 8, 2024 · Ollama also integrates with popular tooling to support embeddings workflows, such as LangChain and LlamaIndex.

Mar 14, 2024 · Download Ollama for the OS of your choice. Step 1: Generate embeddings: pip install ollama chromadb, then create a file named example.py with contents along the lines of the sketch below.
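The original contents of example.py are not reproduced on this page, so here is a minimal sketch of the usual ollama + chromadb pattern instead. The documents, the embedding model (all-minilm), and the generation model (llama2) are illustrative choices, not requirements.

    import chromadb
    import ollama

    documents = [
        "Llamas are members of the camelid family.",
        "Llamas were first domesticated in the Andes.",
    ]

    # Store each document with an embedding produced by an Ollama embedding model.
    client = chromadb.Client()
    collection = client.create_collection(name="docs")
    for i, doc in enumerate(documents):
        emb = ollama.embeddings(model="all-minilm", prompt=doc)["embedding"]
        collection.add(ids=[str(i)], embeddings=[emb], documents=[doc])

    # Embed the question, retrieve the closest document, and answer with it as context.
    question = "What family do llamas belong to?"
    q_emb = ollama.embeddings(model="all-minilm", prompt=question)["embedding"]
    context = collection.query(query_embeddings=[q_emb], n_results=1)["documents"][0][0]

    answer = ollama.generate(
        model="llama2",
        prompt=f"Using this context: {context}\nAnswer this question: {question}",
    )
    print(answer["response"])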
Building Local RAG Chatbots Without Coding (Yanli Liu, Towards Data Science).

Oct 13, 2023 · I had experimented with Ollama as an easy, out-of-the-box way to run local models in the past, and was pleasantly surprised when I heard there was support for exposing a locally running model to a web app via a shell command: OLLAMA_ORIGINS='*' OLLAMA_HOST=localhost:11434 ollama serve. I plugged it in and it turned out to be the missing piece! Note that setting OLLAMA_HOST with ollama run has no effect; it must be applied to ollama serve.

I created a neat terminal AI client that makes requests to it on the local network, called "Jeeves Assistant." Ollama works great with Big-AGI too; look it up on GitHub.

Aug 8, 2023 · Currently, Ollama has CORS rules that allow pages hosted on localhost to connect to localhost:11434. #282 adds support for 0.0.0.0, but some hosted web pages want to leverage a locally running Ollama.

May 3, 2024 · This is typically only needed if you have Ollama running on a different device or if you want to build a more complex setup (for more than just personal use, for example).

Dec 14, 2023 · When I manually kill ollama (to stop it) and restart ollama serve, after resetting the model path and restarting the service, I am still having issues with the ollama server.

Jan 8, 2024 · import subprocess; pr = subprocess.Popen(['ollama', 'run', 'openhermes'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True). After running the subprocess "ollama run openhermes", the server starts running the model, so the client-server connection is working thanks to the OLLAMA_HOST variable.

Oct 12, 2023 · Then, use the following command to download the Ollama image to your computer: docker run -d -v ollama:/root/.ollama -p 11434:11434 -e OLLAMA_ORIGINS="*" --name ollama ollama/ollama.

Apr 19, 2024 · Open WebUI running the LLaMA-3 model deployed with Ollama: introduction.

Here is a non-streaming (that is, not interactive) REST call via Warp with a JSON-style payload. The response was: "response": "The sky appears blue because of a phenomenon called Rayleigh scattering…"
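A rough Python equivalent of that REST call, assuming the requests package, a local server, and a pulled llama2 model; stream is set to False so the whole answer arrives as a single JSON object instead of a stream of chunks.

    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
        timeout=120,
    )
    # The non-streaming reply carries the full text in the "response" field.
    print(resp.json()["response"])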
Now you should be able to connect to Open WebUI from any computer on your local network using your host device's IP, e.g. 192.168.x.x:8080.

May 29, 2024 · Since everything is done locally on the machine, it is important to use network_mode: "host" so Open WebUI can see Ollama.

Run the Ollama Docker container: sudo docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama. You can even use this single-liner: alias ollama='docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama && docker exec -it ollama ollama run llama2'.

The "/api/generate" endpoint is not functioning and displays a 404 on the Windows version (not WSL), despite the Ollama server running and "/" being accessible. Hopefully the team gets on top of this issue for the beta release of Ollama for Windows; for now, I'll keep running on WSL.

Feb 17, 2024 · Ollama runs smoothly on Linux in my WSL environment, so I don't think it's limited GPU memory, as per my previous deductions.

wsl:~$ ollama serve prints startup lines such as 2023/10/25 19:38:15 images.go:822: total blobs: 0 and 2023/10/25 19:38:15 images.go:829: total unused blobs removed: 0.

May 21, 2024 · When I updated to v0.38, I encountered two issues: OLLAMA_HOST no longer works. Dec 29, 2023 · I was under the impression that ollama stores the models locally; however, when I run ollama on a different address with OLLAMA_HOST=0.0.0.0 ollama serve, ollama list says I do not have any models installed and I need to pull them again.

Nov 15, 2023 · It must have been transient; it worked when I tried it today.

Jan 14, 2024 · I too faced the same issue. Try opening port 11434 with sudo ufw allow tcp/11434 on the machine hosting ollama. Step 2: Flush the DNS cache: sometimes the DNS cache can cause issues; you can flush it with ipconfig /flushdns in the command prompt. Give it a try, and good luck with it.

Getting started with Meta Llama: this guide provides information and resources to help you set up Llama, including how to access the model, hosting, and how-to and integration guides. Additionally, you will find supplemental materials to further assist you while building with Llama.

Managing LLMs with Ollama is like managing container lifecycles with container engines such as docker or podman: the pull and run commands are used to download and execute LLMs, respectively. Join Ollama's Discord to chat with other community members, maintainers, and contributors.

Then, from there, run a reverse proxy to forward 11434 to HTTPS via 443 or standard SSL. Basic auth does not work with the ollama client; how should we solve this? Allowing basic auth to be set via OLLAMA_HOST or another client variable would be most appreciated. Today the only way to secure the connection between the ollama client and the ollama server is IP filtering; basic auth would be a bit better.
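Ollama itself has no authentication, so the usual workaround discussed here is to put a Basic-auth reverse proxy in front of port 11434 and have clients send credentials. A sketch under that assumption: the proxy address and credentials are placeholders, and the headers argument is simply forwarded by the Python client on every request.

    import base64

    import ollama

    # Hypothetical proxy URL and credentials; the proxy, not Ollama, checks them.
    token = base64.b64encode(b"myuser:mypassword").decode()
    client = ollama.Client(
        host="https://ollama.example.com",
        headers={"Authorization": f"Basic {token}"},
    )
    print(client.list())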
docker run -d --restart always --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm

If you have an AMD GPU that supports ROCm, you can simply run the ROCm version of the Ollama image; if your AMD GPU doesn't support ROCm but is strong enough, you can still… If you're looking for that extra oomph with GPU support, check out the Ollama blog post on the Docker image that supports Nvidia GPUs.

Nov 30, 2023 · Problem: the Ollama service I've installed on a Google VM doesn't seem to accept incoming requests over HTTPS. I've taken the following steps. Server configuration: I configured a reverse proxy using Apache2. I'm aiming to allow external requests to reach the server and to enable HTTPS support for the Ollama service.

May 22, 2024 · Before that, let's check whether the compose YAML file runs appropriately; we can dry-run the YAML file with the docker compose --dry-run command shown earlier.

May 11, 2024 · 👋 Hey tech enthusiasts! Today I'm thrilled to share a complete guide on self-hosting the Llama 3 language model using Google Cloud, whether you're using GC…

WizardLM-2 is a next-generation state-of-the-art large language model with improved performance on complex chat, multilingual, reasoning, and agent use cases; wizardlm2:8x22b is the most advanced model in the family. CodeGemma is a collection of powerful, lightweight models that can perform a variety of coding tasks like fill-in-the-middle code completion, code generation, natural language understanding, mathematical reasoning, and instruction following.

Jan 17, 2024 · Ollama is a user-friendly tool and can be operated via the terminal or as a REST API. Apr 2, 2024 · This article will guide you through downloading and using Ollama, a powerful tool for interacting with open-source large language models (LLMs) on your local machine; we'll explore how to download it and work with models. Resources.

May 25, 2024 · Introduction to Ollama. Ollama is a versatile platform that simplifies the process of running large language models (LLMs) locally on your machine. Unlike closed-source models like ChatGPT, Ollama offers transparency and customization, making it a valuable resource for developers and enthusiasts.

Apr 21, 2024 · Then click "models" on the left side of the modal and paste in the name of a model from the Ollama registry.

Oct 7, 2023 · Dear maintainers, thank you very much for creating this project! I need to set up ollama on Linux behind a proxy, and when pulling I get an error: download.go:166: couldn't download blob: Get "http…"

OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve: configuring this on Windows and on macOS is covered in the environment-variable notes above.

Jun 5, 2024 · The Python library mirrors the CLI and the REST API: ollama.pull('llama3'), ollama.push('user/llama3'), ollama.copy('llama3', 'user/llama3'), ollama.delete('llama3'), ollama.ps(), and ollama.embeddings(…). A custom client can be created with fields such as host (the Ollama server address).
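A short sketch of that model-management API, assuming llama3 is already present locally; the copied name user/llama3 is only an example, and the push call is commented out because it needs an ollama.com namespace.

    import ollama

    # Copy an existing local model under a new name.
    ollama.copy("llama3", "user/llama3")

    # Show which models are currently loaded into memory.
    print(ollama.ps())

    # ollama.push("user/llama3")  # would upload the copy to your ollama.com namespace

    # Clean up the copy again.
    ollama.delete("user/llama3")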