To run an LLM at home with a simple setup, we will use https://petals.dev/, which splits the model across a swarm of peers, BitTorrent-style:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
model_name = "meta-llama/Meta-Llama-3-70B"  # This is Llama 3 (70B)

# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
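The snippet above cuts off after loading the tokenizer. A minimal sketch of the remaining steps, following the quickstart pattern from the Petals README (the prompt string here is only a placeholder):

```python
# Load the distributed model; its transformer blocks are hosted by
# remote servers in the public swarm rather than on this machine
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Run inference as if the model were local; each forward pass is
# routed through the peers holding the relevant layers
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```

Note that `generate()` behaves like the standard Hugging Face API, so sampling parameters such as `temperature` or `do_sample` can be passed the same way; only the forward computation is distributed.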