
Tuesday, April 16, 2024

AnythingLLM with Ollama

1. Install Ollama first

2. Install Chroma with Docker (verify with the sketch below)

  • PS>> docker pull ghcr.io/chroma-core/chroma:0.4.24
  • PS>> docker run -p 8000:8000 ghcr.io/chroma-core/chroma:0.4.24
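
  A minimal sketch to confirm the Chroma container is reachable, assuming the chromadb Python client is installed (pip install chromadb):

    # Connectivity check for the Chroma container started above
    import chromadb

    client = chromadb.HttpClient(host="localhost", port=8000)
    print(client.heartbeat())  # prints a timestamp when the server is up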

3. Install AnythingLLM with Docker

  • Create the directory C:/anythingllm/storage and the file C:/anythingllm/env.txt
  • PS>> docker run -d -p 3001:3001 --cap-add SYS_ADMIN -v C:/anythingllm/storage:/app/server/storage -v C:/anythingllm/env.txt:/app/server/.env -e STORAGE_DIR="/app/server/storage" --add-host=host.docker.internal:host-gateway --name anythingllm mintplexlabs/anythingllm
  • Open localhost:3001 and configure the LLM settings to point at Ollama via http://host.docker.internal:11434
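
Before wiring AnythingLLM to Ollama, a minimal sketch to confirm Ollama is serving models, assuming the requests package is installed (from the host it listens on localhost:11434; the container reaches it via host.docker.internal):

    # List the models the local Ollama server exposes via its REST API
    import requests

    resp = requests.get("http://localhost:11434/api/tags")
    for model in resp.json().get("models", []):
        print(model["name"])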

Thursday, March 28, 2024

LLaMA-Factory

 For Windows


  • Download the source from https://github.com/hiyouga/LLaMA-Factory and unzip it
  • Install rustup-init.exe from https://rustup.rs/
  • Use Anaconda to create an env and activate it
    • Install Anaconda ( https://www.anaconda.com/download )
    • Install Python ( https://www.python.org/downloads/ )
    • Install PyTorch ( https://pytorch.org/get-started/locally/ ; use conda to install the CUDA 12.x build )
      • # Test CUDA with the GPU
      • python
      • >>> import torch
      • >>> torch.cuda.is_available()
      • True
  • cd into the llama-factory directory
    • pip install -r requirements.txt
    • pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.39.1-py3-none-win_amd64.whl
    • python .\src\train_web.py
  • Open localhost:7860 and train your LLM
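
A quick sanity check before training, assuming PyTorch and the bitsandbytes wheel above are installed in the active env:

    # Verify the CUDA build of PyTorch and the bitsandbytes wheel import cleanly
    import torch
    import bitsandbytes as bnb  # import alone confirms the Windows wheel loads

    print("torch", torch.__version__, "cuda", torch.version.cuda)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))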

Saturday, March 2, 2024

Use Ollama with Chatbot-Ollama to run local LLM files.

Just go

  • Download and install Ollama from ollama.com or ollama.ai
    • Open http://127.0.0.1:11434/ ; it should show [Ollama is running]
    • Use PowerShell to drive ollama
      command: ollama list , to list all models
      command: ollama rm modelname , to delete a model
  • Install a Docker manager and use it to install chatbot-ollama
    • Open http://127.0.0.1:3000/
  • Use already-downloaded model files
    • Create a Modelfile named your_modelname, and configure the LLM path like below
      FROM c:\path\your_modelname.gguf
      SYSTEM ""
    • Open PowerShell, go to the directory containing the your_modelname file, and run commands like below
      ollama create your_modelname -f ./your_modelname
      ollama run your_modelname (type /bye to exit)
  • Open http://127.0.0.1:3000/zh and chat with the LLM
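
Besides the web UI, the model can be queried directly over Ollama's REST API; a minimal sketch, assuming requests is installed and your_modelname was created as above:

    # One-shot, non-streaming generation against the local Ollama server
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "your_modelname", "prompt": "Hello!", "stream": False},
    )
    print(resp.json()["response"])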

Thursday, November 9, 2023

Stable Diffusion for Win & Anaconda

 Install the AI Env

  • Clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    • Download a model from https://civitai.com/ and move it to stable-diffusion-webui\models\Stable-diffusion
  • Download v1-5-pruned-emaonly.ckpt from https://huggingface.co/runwayml/stable-diffusion-v1-5 (uses less VRAM), and move it to stable-diffusion-webui\models\Stable-diffusion
  • In Settings -> Face restoration, click GFPGAN
  • Run webui-user.bat
  • Open http://127.0.0.1:7860
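
If webui-user.bat is launched with the --api flag (added to COMMANDLINE_ARGS), images can also be generated programmatically; a minimal sketch, assuming the requests package is installed:

    # txt2img through the AUTOMATIC1111 webui API; saves the first result
    import base64
    import requests

    payload = {"prompt": "a castle on a hill", "steps": 20}
    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
    with open("out.png", "wb") as f:
        f.write(base64.b64decode(resp.json()["images"][0]))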

Env for AI

 Install Python from https://www.python.org/downloads/ (run the installer as admin and add Python to PATH)

Install Anaconda from https://www.anaconda.com/download

Install PyTorch from https://pytorch.org/get-started/locally/ : pick your CUDA version and install the matching wheel, e.g. pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Install Git and Git LFS
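
A minimal sketch to confirm the cu118 wheel actually computes on the GPU, not just imports:

    # Run a small matrix multiply on the GPU; this fails if CUDA is misconfigured
    import torch

    x = torch.rand(1024, 1024, device="cuda")
    print((x @ x).sum().item())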

Saturday, October 21, 2023

gpt4all-j with Streamlit

 modified on 2023/10/26

git clone https://github.com/ajvikram/streamlit-gpt.git

cd streamlit-gpt 

create a model directory and download https://gpt4all.io/models/ggml-gpt4all-j.bin into it

go back to the streamlit-gpt directory


run Anaconda

conda create -n gptj python=3.11.5

conda activate gptj


pip install -r requirements.txt

streamlit run main.py --server.port=8080 --server.address=0.0.0.0 (replace 0.0.0.0 with your IP)
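
The downloaded model can also be driven without Streamlit through the gpt4all Python bindings; a sketch, with the caveat that newer gpt4all releases expect GGUF models, so an older bindings version may be needed for this ggml .bin:

    # Load the downloaded ggml-gpt4all-j.bin directly and generate a reply
    from gpt4all import GPT4All

    model = GPT4All("ggml-gpt4all-j.bin", model_path="./model")
    print(model.generate("What is Streamlit?", max_tokens=128))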