Introduction
We can pass multiple functions to OpenAI. Based on the input chat message, OpenAI will do the following:
- Choose which function(s) to execute
- Pass the parameters based on the chat message
- Make sure that the data types are correct for the function
The actual function is executed on the user's computer or server, not by the OpenAI completion API. The models that support function calling are
1. gpt-3.5-turbo-0125
2. gpt-4-turbo-preview
How did we call functions before?
Before the function-calling feature, we used to pass the message to the ChatGPT API and ask it to extract the information from the input message in JSON format. The problem with this approach is that it would sometimes return invalid JSON or the wrong data types, so we had to build an agent that retried multiple times. Function calling improves the accuracy of this process.
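The old workaround typically looked something like the sketch below: ask the model for JSON in the prompt, then parse its free-text reply defensively, because the JSON may be wrapped in prose or be malformed. The helper name and parsing strategy here are illustrative, not from any particular library.

```python
import json

def parse_model_json(reply: str):
    """Try to pull a JSON object out of a raw model reply.

    The model sometimes wraps the JSON in prose or code fences,
    so we look for the outermost braces before parsing.
    Returns None on failure, so the caller can retry.
    """
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        return json.loads(reply[start:end + 1])
    except json.JSONDecodeError:
        return None

# A clean reply parses fine...
print(parse_model_json('{"days": 1, "reason": "sick"}'))
# ...and prose wrapped around the JSON is also handled,
# but a retry loop is still needed for truly malformed replies.
print(parse_model_json('Sure! Here it is: {"days": 1, "reason": "sick"}'))
```

Function calling removes most of this fragility because the model emits the arguments as structured JSON tied to a declared schema.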
Use case
To explain function calling, let's take an example. Assume we are building a question-answer chatbot for school students. Students can ask the chatbot two kinds of questions:
- They can apply for leave by stating the reason, e.g. "I am not well today. I want to apply for leave today." We have to extract the information from the chat message to call the function below:
import json

def apply_for_leave(number_of_days, reason, type_of_leave):
    """
    A function which calls the internal API to apply for leave
    :param number_of_days: Number of days required
    :param reason: Reason for leave
    :param type_of_leave: Sick or Holiday leave
    :return: Returns status for the leave
    """
    # Call the internal API to apply for the leave
    return json.dumps({"days": number_of_days, "reason": reason, "type": type_of_leave, "status": "approved"})
- They can view their marks by providing their student ID, which executes the function below:
def get_my_marks(registration_number):
    """
    Returns the marks for the given registration number
    :param registration_number: Registration id of the student
    :return: Returns the marks
    """
    # Call the internal API to get the marks
    return json.dumps({"registration_number": registration_number, "marks": {"CS": 90, "English": 89}})
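Since both stubs simply return hard-coded JSON strings, they can be sanity-checked locally before any OpenAI call is involved. The bodies below just repeat the stubs above so the snippet is self-contained:

```python
import json

# Stub implementations mirroring the two functions above.
def apply_for_leave(number_of_days, reason, type_of_leave):
    return json.dumps({"days": number_of_days, "reason": reason,
                       "type": type_of_leave, "status": "approved"})

def get_my_marks(registration_number):
    return json.dumps({"registration_number": registration_number,
                       "marks": {"CS": 90, "English": 89}})

# Each returns a JSON string, ready to be sent back as a tool result.
leave = json.loads(apply_for_leave(1, "Not feeling well", "Sick Leave"))
print(leave["status"])        # → approved

marks = json.loads(get_my_marks(12345))
print(marks["marks"]["CS"])   # → 90
```

Returning JSON strings (rather than Python objects) matters because the function output is passed back to the model as the `content` of a tool message, which must be text.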
Let's see how we can select and execute the above-mentioned functions using OpenAI function calling.
Steps of function calling
- Define the functions and their parameters in the following format
- Call the OpenAI model with the above parameters
- The model chooses which function(s) to execute and returns the input parameters as JSON
- Parse the JSON input and make sure that it contains valid functions and parameters
- Execute the function with the required parameters
- Pass the output back to the OpenAI API to generate the final answer from the function's return value
import json
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def run_conversation(message):
    # Step 1
    messages = [{"role": "user", "content": message}]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "apply_for_leave",
                "description": "A function which calls the internal API to apply for leave",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "number_of_days": {
                            "type": "integer",
                            "description": "Number of days required",
                        },
                        "reason": {
                            "type": "string",
                            "description": "Reason for leave",
                        },
                        "type_of_leave": {"type": "string", "enum": ["Sick Leave", "Holiday Leave"]},
                    },
                    "required": ["number_of_days", "reason", "type_of_leave"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "get_my_marks",
                "description": "Returns the marks for the given registration number",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "registration_number": {
                            "type": "integer",
                            "description": "Registration id of the student",
                        }
                    },
                    "required": ["registration_number"],
                },
            },
        },
    ]
    # Step 2
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=messages,
        tools=tools,
        tool_choice="auto",  # auto is default, but we'll be explicit
    )
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls
    # Step 3
    if tool_calls:
        # Note: the JSON response may not always be valid; be sure to handle errors
        # Step 4
        available_functions = {
            "apply_for_leave": apply_for_leave,
            "get_my_marks": get_my_marks,
        }  # map of local Python functions the model may request
        messages.append(response_message)  # extend conversation with assistant's reply
        # Step 5
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            if function_name == "apply_for_leave":
                function_response = function_to_call(
                    number_of_days=function_args.get("number_of_days"),
                    reason=function_args.get("reason"),
                    type_of_leave=function_args.get("type_of_leave"),
                )
            elif function_name == "get_my_marks":
                function_response = function_to_call(
                    registration_number=function_args.get("registration_number"),
                )
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )  # extend conversation with the function response
        # Step 6
        second_response = client.chat.completions.create(
            model="gpt-3.5-turbo-0125",
            messages=messages,
        )  # get a new response from the model where it can see the function response
        return second_response
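As the note in Step 3 warns, the arguments string is not guaranteed to be valid JSON, and the model can occasionally name a function that does not exist. A small guard like the hypothetical `safe_dispatch` below (the names and error payloads are illustrative, not from the article's repo) turns those failures into error messages that can be sent back to the model as the tool result instead of crashing the loop:

```python
import json

# Stub mirroring the article's apply_for_leave, so the snippet is self-contained.
def apply_for_leave(number_of_days, reason, type_of_leave):
    return json.dumps({"days": number_of_days, "reason": reason,
                       "type": type_of_leave, "status": "approved"})

# Local registry of functions the model is allowed to call
# (same idea as available_functions in run_conversation).
AVAILABLE = {"apply_for_leave": apply_for_leave}

def safe_dispatch(function_name, raw_arguments):
    """Validate a tool call before executing it.

    Returns an error payload instead of raising, so the error can be
    fed back to the model as the tool-message content.
    """
    func = AVAILABLE.get(function_name)
    if func is None:
        return json.dumps({"error": f"unknown function: {function_name}"})
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError:
        return json.dumps({"error": "arguments were not valid JSON"})
    try:
        return func(**args)
    except TypeError:
        return json.dumps({"error": "missing or unexpected parameters"})

# A well-formed call goes through to the real function...
print(safe_dispatch(
    "apply_for_leave",
    '{"number_of_days": 1, "reason": "fever", "type_of_leave": "Sick Leave"}'))
# ...while malformed arguments become an error payload the model can react to.
print(safe_dispatch("apply_for_leave", "not json"))
```

Feeding the error back to the model often lets it correct itself in the next turn, which is cheaper than aborting the whole conversation.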
Output
Conclusion
With the help of function calling, we can choose and call Python functions far more reliably, which improves the accuracy of the system. Under the hood, it simply passes the function skeletons to the ChatGPT API, which generates the JSON that selects the function and its arguments; our own code then executes the function. It can be an expensive solution if you have a large number of functions and many parameters to extract from the chat message. Function calling will also help us build state machines using LangGraph, which we will discuss in upcoming articles.
You can download the full code from this git repo. Feel free to comment if you have any questions.