
[AI] LLAMA 3 - Running It Without a High-Performance GPU

무우님 2024. 4. 22. 20:50

To run an LLM at home the easy way, we'll use https://petals.dev/, which splits the model into blocks of layers, BitTorrent-style, and serves them across a distributed swarm of machines, so no single computer has to load the full 70B model.

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
model_name = "meta-llama/Meta-Llama-3-70B"  # The base Llama 3 (70B); gated on Hugging Face, so you may need an access token

# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Run the model as if it were on your computer
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...


That is the entire client-side code.
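Since AutoDistributedModelForCausalLM exposes the standard Hugging Face generate() interface, the usual decoding options also work over the swarm. A minimal sketch, reusing tokenizer and model from the snippet above; the prompt and the sampling values are illustrative choices, not from the Petals docs:

# Sampling-based decoding over the distributed model
# (reuses `tokenizer` and `model` from the snippet above;
#  the prompt and parameter values are illustrative)
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(
    inputs,
    max_new_tokens=40,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.8,   # soften the next-token distribution
    top_p=0.9,         # nucleus sampling
)
print(tokenizer.decode(outputs[0]))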

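For multi-turn use such as a chatbot, Petals can also keep the attention caches on the swarm between calls through an inference session, so earlier tokens are not re-processed on every turn. A rough sketch based on the Petals README's chat example; the prompt format here is illustrative, and the exact API should be checked against the current docs:

# Multi-turn generation with a Petals inference session
# (a sketch based on the Petals README; reuses `tokenizer` and `model` from above)
with model.inference_session(max_length=512) as sess:
    for user_turn in ["Hello!", "What can you do?"]:
        # With session=sess, only the NEW tokens need to be sent;
        # the swarm already holds the caches for earlier turns.
        prefix = f"Human: {user_turn}\nAI:"
        inputs = tokenizer(prefix, return_tensors="pt")["input_ids"]
        outputs = model.generate(inputs, max_new_tokens=30, session=sess)
        print("AI:", tokenizer.decode(outputs[0, inputs.shape[1]:]))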

You can try it yourself in this Colab notebook: https://colab.research.google.com/drive/1uCphNY7gfAUkdDrTx21dZZwCOUDCMPw8?usp=sharing