
# Create Your Own Language Model

This guide shows how to bring your own language model to MTLLM. This is useful if you have a self-hosted language model or are using a service that MTLLM does not currently support.

IMPORTANT

This guide assumes you already know how to run inference with your language model. If you are not sure, refer to your language model's documentation.

## Steps

1. Create a new class that inherits from the `BaseLLM` class.

In Python,

my_llm.py

```python
from mtllm.llms.base import BaseLLM


class MyLLM(BaseLLM):
    def __init__(self, verbose: bool = False, max_tries: int = 10, **kwargs):
        self.verbose = verbose
        self.max_tries = max_tries
        # Your model initialization code here

    def __infer__(self, meaning_in: str | list[dict], **kwargs) -> str:
        # Your inference code here
        # For a multimodal (vision-language) model, meaning_in is a list of
        # dicts in the OpenAI message format with encoded images
        # kwargs are the model-specific parameters
        return 'Your response'
```

In Jaclang,

my_llm.jac

```jac
import:py from mtllm.llms.base, BaseLLM;

class MyLLM:BaseLLM: {
    can init(verbose: bool = False, max_tries: int = 10, **kwargs: dict) -> None {
        self.verbose = verbose;
        self.max_tries = max_tries;
        # Your model initialization code here
    }

    can __infer__(meaning_in: str | list[dict], **kwargs: dict) -> str {
        # Your inference code here
        # For a multimodal (vision-language) model, meaning_in is a list of
        # dicts in the OpenAI message format with encoded images
        # kwargs are the model-specific parameters
        return 'Your response';
    }
}
```
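As a concrete illustration, here is a minimal Python sketch of a subclass that calls a self-hosted model behind an OpenAI-compatible HTTP endpoint. The endpoint URL, the `base_url` and `model_name` parameters, the payload shape, and the use of the `requests` library are all illustrative assumptions, not part of MTLLM:

```python
import requests

from mtllm.llms.base import BaseLLM


class MySelfHostedLLM(BaseLLM):
    def __init__(self, verbose: bool = False, max_tries: int = 10, **kwargs):
        self.verbose = verbose
        self.max_tries = max_tries
        # Hypothetical settings for a self-hosted, OpenAI-compatible server
        self.base_url = kwargs.get("base_url", "http://localhost:8000/v1")
        self.model_name = kwargs.get("model_name", "my-model")

    def __infer__(self, meaning_in: str | list[dict], **kwargs) -> str:
        # A plain string becomes a single user message; multimodal input is
        # assumed to arrive already in the OpenAI message format.
        messages = (
            [{"role": "user", "content": meaning_in}]
            if isinstance(meaning_in, str)
            else meaning_in
        )
        response = requests.post(
            f"{self.base_url}/chat/completions",
            json={"model": self.model_name, "messages": messages, **kwargs},
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]
```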

2. Initialize your model with the required parameters.

app.jac

```jac
import:jac from my_llm, MyLLM; # For Jaclang
import:py from my_llm, MyLLM; # For Python

llm = MyLLM();
```
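Any keyword arguments your `__init__` accepts can be passed at construction time. For example, using the hypothetical class and parameters from the sketch above:

```python
from my_llm import MySelfHostedLLM

# base_url is one of the hypothetical kwargs from the sketch above
llm = MySelfHostedLLM(verbose=True, base_url="http://localhost:8000/v1")
```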

## Changing the Prompting Techniques

You can change the prompting techniques by overriding the following class attributes in your class.

my_llm.py

```python
from mtllm.llms.base import BaseLLM


class MyLLM(BaseLLM):
    MTLLM_SYSTEM_PROMPT = 'Your System Prompt'
    MTLLM_PROMPT = 'Your Prompt'  # Not recommended to change this
    MTLLM_METHOD_PROMPTS = {
        "Normal": 'Your Normal Prompt',
        "Reason": 'Your Reason Prompt',
        "Chain-of-Thoughts": 'Your Chain-of-Thoughts Prompt',
        "ReAct": 'Your ReAct Prompt',
    }
    OUTPUT_FIX_PROMPT = 'Your Output Fix Prompt'
    OUTPUT_CHECK_PROMPT = 'Your Output Check Prompt'

    # Rest of the code
```
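For example, a subclass might keep the default prompts and replace only the system prompt; a minimal sketch, where the prompt text itself is just a placeholder:

```python
from mtllm.llms.base import BaseLLM


class TerseLLM(BaseLLM):
    # Only the system prompt is overridden; MTLLM_PROMPT, the method
    # prompts, and the output fix/check prompts keep their BaseLLM defaults.
    MTLLM_SYSTEM_PROMPT = (
        "You are a careful assistant. Follow the requested output format "
        "exactly and do not add commentary."
    )
```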

Check the API Reference for more information on prompting techniques.

That's it! You have successfully created your own language model for use with MTLLM.

NOTICE

We are constantly adding new LMs to the library. If you want to add a new LM, please open an issue here.
