Create a Python file named install-llama-3.1-8b.py with the following code:
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Login to Hugging Face
access_token_read = ""  # paste your Hugging Face access token here
login(token=access_token_read)

# Model ID
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# Load model (simpler version, no quantization)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16  # Use bfloat16 or float16 if supported
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Create text generation pipeline
text_gen = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    pad_token_id=tokenizer.eos_token_id
)

# Test the pipeline
response = text_gen("What is the capital of France?", max_new_tokens=100)
print(response[0]['generated_text'])
Log in to your Hugging Face account and generate an access token with user and repository read permissions, then paste it into access_token_read.
Run the script:
python install-llama-3.1-8b.py
Upon successful execution, the script will:
- Download the model from the Hugging Face repository into the local cache (/Users/…/.cache). From the next run onward, the model is loaded from the local cache.
- Send a prompt to the model and display the response
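If you want to confirm or relocate that cache (for example, onto an external drive before a multi-gigabyte download), Hugging Face libraries consult the HF_HOME environment variable before falling back to the default location. A minimal check, assuming the standard Hugging Face cache layout:

```python
import os
from pathlib import Path

# Hugging Face caches files under $HF_HOME, defaulting to ~/.cache/huggingface.
# Downloaded model weights land in the "hub" subdirectory.
default_cache = Path.home() / ".cache" / "huggingface"
cache_root = Path(os.environ.get("HF_HOME", default_cache))
print(cache_root / "hub")
```

Exporting HF_HOME before running the script redirects the download without any code changes.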
In this guide, you've learned how to set up and run the Meta-LLaMA 3.1 8B Instruct model locally on a macOS machine using Hugging Face Transformers and PyTorch. Running LLMs locally gives you more control, privacy, and customisation power.
If you've followed the steps successfully, you should now be able to:
- Load and run LLaMA 3.1 using a simple Python script
- Handle large models efficiently with quantization
- Generate text responses using instruct-tuned prompts
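The script above loads the model in float16 without quantization. On a machine with a CUDA GPU (bitsandbytes does not currently support macOS), a 4-bit quantized load is a small change; this sketch assumes the bitsandbytes package is installed and is a configuration fragment rather than a drop-in replacement:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# 4-bit NF4 quantization roughly quarters the memory footprint of the weights
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quant_config,
)
```

The tokenizer and pipeline setup stay exactly as in the main script.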
Next Steps
- Build a chatbot or command-line assistant using this model
- Explore prompt engineering to optimize results
- Experiment with multi-turn conversations
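As a starting point for the chatbot and multi-turn ideas above, a conversation can be kept as a growing list of role/content messages that is re-sent to the model each turn, since the model itself is stateless. A minimal sketch: the generate callable here stands in for the text_gen pipeline built earlier (recent transformers pipelines accept such message lists directly, which you should verify against your installed version):

```python
def chat_turn(history, user_message, generate):
    """Append the user message, call the model on the full history,
    and append the assistant reply. `generate` is any callable that
    maps a list of {"role", "content"} messages to a reply string."""
    history.append({"role": "user", "content": user_message})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Demo with a stub model so the bookkeeping can be checked without a GPU
def echo_model(messages):
    return f"You said: {messages[-1]['content']}"

history = [{"role": "system", "content": "You are a helpful assistant."}]
print(chat_turn(history, "What is the capital of France?", echo_model))
print(chat_turn(history, "And of Italy?", echo_model))
print(len(history))  # system + 2 user + 2 assistant messages = 5
```

With the real model, generate would wrap text_gen and extract the assistant's reply from its output; the important part is that the whole history goes back in on every turn.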