
Language Models

Language models are the most important building block of MTLLM; without them, we can't achieve neuro-symbolic programming.

Let's first make sure you can set up your language model. MTLLM supports clients for many remote and local LMs, and you can easily create your own as well.

Setting up an LM client

In this section, we will go through the process of setting up an OpenAI GPT-4o language model client. First, make sure that you have installed the necessary dependencies by running pip install mtllm[openai].

```jac
import:py from mtllm.llms.openai, OpenAI;

my_llm = OpenAI(model_name="gpt-4o");
```

Make sure to set the OPENAI_API_KEY environment variable with your OpenAI API key.
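If you prefer to set the key from code rather than the shell, here is a minimal Python sketch; the key value below is a placeholder, not a real credential:

```python
import os

# Placeholder key for illustration only; a real application would read the
# key from a secrets manager or a .env file rather than hard-coding it.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")

# The client library will pick the key up from the environment.
print(os.environ["OPENAI_API_KEY"])
```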

Directly calling the LM

You can also call the LM directly by passing it raw prompts.

```jac
my_llm("What is the capital of France?");
```

You can also pass max_tokens, temperature, and other parameters to the LM.

```jac
my_llm("What is the capital of France?", max_tokens=10, temperature=0.5);
```
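Conceptually, each MTLLM client is just a callable object that takes a prompt plus optional generation parameters. The Python sketch below illustrates that calling convention with a stand-in class; the class name, defaults, and return format are illustrative, not mtllm's actual implementation:

```python
class FakeLLMClient:
    """Illustrative stand-in for an LM client: a callable that accepts a
    prompt plus optional generation parameters (not mtllm's real class)."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    def __call__(self, prompt: str, max_tokens: int = 256,
                 temperature: float = 0.7) -> str:
        # A real client would send the prompt to the provider's API;
        # here we just echo the settings to show the interface.
        return (f"[{self.model_name} | max_tokens={max_tokens}, "
                f"temperature={temperature}] {prompt}")


my_llm = FakeLLMClient(model_name="gpt-4o")
print(my_llm("What is the capital of France?", max_tokens=10, temperature=0.5))
```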

Using the LM with MTLLM

The intended use of MTLLM's LMs is with jaclang's by llm() feature.

With Abilities and Methods

```jac
can function(arg1: str, arg2: str) -> str by llm();
```

With Classes

```jac
new_object = MyClass(arg1: str by llm());
```

You can pass the following attributes to the by llm() feature:

Enabling Verbose Mode

You can enable the verbose mode to see the internal workings of the LM.

```jac
import:py from mtllm.llms, OpenAI;

my_llm = OpenAI(model_name="gpt-4o", verbose=True);
```
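Conceptually, verbose mode logs the prompts sent to the model and the responses it returns around each call. A hypothetical Python sketch of that behavior (not mtllm's actual implementation):

```python
class VerboseWrapper:
    """Hypothetical illustration of what verbose=True does: print the
    prompt and the raw response around each LM call."""

    def __init__(self, client, verbose: bool = False):
        self.client = client
        self.verbose = verbose

    def __call__(self, prompt: str, **kwargs) -> str:
        if self.verbose:
            print(f"PROMPT: {prompt}")
        response = self.client(prompt, **kwargs)
        if self.verbose:
            print(f"RESPONSE: {response}")
        return response


# Usage with a stand-in client that just echoes its prompt:
def echo_client(prompt, **kwargs):
    return f"echo: {prompt}"

llm = VerboseWrapper(echo_client, verbose=True)
result = llm("What is the capital of France?")
```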

Remote LMs

These language models are provided as managed services; to access them, simply sign up and obtain an API key.

NOTICE

Before calling any of the remote language models listed below, make sure to set the corresponding environment variable with your API key. Use chat models for better performance.

```jac
llm = mtllm.llms.{provider_listed_below}(model_name="your model", verbose=True/False);
```

  1. OpenAI - OpenAI's gpt-3.5-turbo, gpt-4, gpt-4-turbo, and gpt-4o model zoo
  2. Anthropic - Anthropic's Claude 3 and Claude 3.5 (Haiku, Sonnet, Opus) model zoo
  3. Groq - Groq's fast-inference model zoo
  4. Together - Together's hosted open-source model zoo
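Each remote provider reads its API key from a provider-specific environment variable. The mapping below uses these providers' conventional variable names (double-check each provider's own documentation); the helper function is a hypothetical convenience, not part of mtllm:

```python
import os

# Conventional API-key environment variables for the providers above.
API_KEY_ENV_VARS = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Groq": "GROQ_API_KEY",
    "Together": "TOGETHER_API_KEY",
}


def has_api_key(provider: str) -> bool:
    """Return True if the provider's API key is set in the environment."""
    return bool(os.environ.get(API_KEY_ENV_VARS[provider]))
```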

Local LMs

Ollama

Start an Ollama server by following the tutorial here. Then you can use it as follows:

```jac
import:py from mtllm.llms.ollama, Ollama;

llm = Ollama(host="ip:port of the ollama server", model_name="llama3", verbose=True/False);
```
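The host argument is the "ip:port" of the running server; Ollama listens on port 11434 by default. A small hedged helper (not part of mtllm) for turning that string into a base URL:

```python
def ollama_base_url(host: str = "127.0.0.1:11434") -> str:
    """Build a base URL for an Ollama server from an "ip:port" string.
    11434 is Ollama's default port; adjust if your server differs."""
    if not host.startswith(("http://", "https://")):
        host = f"http://{host}"
    return host


print(ollama_base_url())                 # local server on the default port
print(ollama_base_url("10.0.0.5:11434"))  # hypothetical remote server
```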

HuggingFace

You can use any of HuggingFace's language models as well.

```jac
import:py from mtllm.llms.huggingface, HuggingFace;

llm = HuggingFace(model_name="microsoft/Phi-3-mini-4k-instruct", verbose=True/False);
```

NOTICE

We are constantly adding new LMs to the library. If you want a new LM added, please open an issue here.

