Language models are the most important building block of MTLLM; without them we can't achieve neuro-symbolic programming.
Let's first make sure you can set up your language model. MTLLM supports clients for many remote and local LMs, and you can easily create your own as well.
In this section, we will go through the process of setting up OpenAI's GPT-4o
language model client. First, make sure that you have installed the necessary dependencies by running `pip install mtllm[openai]`.
```jac
import:py from mtllm.llms.openai, OpenAI;

my_llm = OpenAI(model_name="gpt-4o");
```
Make sure to set the `OPENAI_API_KEY`
environment variable with your OpenAI API key.
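For example, in a POSIX shell (the key value shown is a placeholder; substitute your own key):

```shell
# Export the key so the OpenAI client can pick it up (placeholder value)
export OPENAI_API_KEY="your-api-key"
```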
You can also call the LM directly by giving it a raw prompt.

```jac
my_llm("What is the capital of France?");
```
You can also pass `max_tokens`, `temperature`, and other parameters to the LM.

```jac
my_llm("What is the capital of France?", max_tokens=10, temperature=0.5);
```
The intended use of MTLLM's LMs is with jaclang's `by llm` feature.

```jac
can function(arg1: str, arg2: str) -> str by llm();
```

```jac
new_object = MyClass(arg1: str by llm());
```
The `by llm()` feature accepts the following parameters:

- `method` (default: `Normal`): Reasoning method to use. Can be `Normal`, `Reason`, or `Chain-of-Thoughts`.
- `tools` (default: `None`): Tools to use. This is a list of abilities to use with the ReAct prompting method.
- Model-specific parameters: you can also pass model-specific parameters, for example `max_tokens`, `temperature`, etc.

You can enable verbose mode to see the internal workings of the LM.
```jac
import:py from mtllm.llms, OpenAI;

my_llm = OpenAI(model_name="gpt-4o", verbose=True);
```
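Putting the parameters together, a minimal sketch (the ability name, signature, and parameter values here are illustrative, not from the original):

```jac
import:py from mtllm.llms, OpenAI;

glob llm = OpenAI(model_name="gpt-4o");

# Use the Reason method together with a model-specific parameter
can summarize(text: str) -> str by llm(method="Reason", temperature=0.3);
```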
These language models are provided as managed services; to access them, simply sign up and obtain an API key.

NOTICE

Before calling any of the remote language models listed below, make sure to set the corresponding environment variable with your API key. Use chat models for better performance.
```jac
llm = mtllm.llms.{provider_listed_below}(model_name="your model", verbose=True/False);
```
- OpenAI - OpenAI's gpt-3.5-turbo, gpt-4, gpt-4-turbo, gpt-4o (model zoo)
- Anthropic - Anthropic's Claude 3 & Claude 3.5 - Haiku, Sonnet, Opus (model zoo)
- Groq - Groq's fast inference models (model zoo)
- Together - Together's hosted open-source models (model zoo)

To use a local model, initiate an Ollama server by following the tutorial here. Then you can use it as follows:
```jac
import:py from mtllm.llms.ollama, Ollama;

llm = Ollama(host="ip:port of the ollama server", model_name="llama3", verbose=True/False);
```
You can use any of HuggingFace's language models as well.
```jac
import:py from mtllm.llms.huggingface, HuggingFace;

llm = HuggingFace(model_name="microsoft/Phi-3-mini-4k-instruct", verbose=True/False);
```
NOTICE
We are constantly adding new LMs to the library. If you want to add a new LM, please open an issue here.