Adjust baibot's openai-config.yml.j2 to avoid max_response_tokens if unspecified
Reasoning models like `o1` and `o3`, and their `-mini` variants, report errors if we try to configure `max_response_tokens` (which ultimately populates the `max_tokens` field in the API request):

> invalid_request_error: Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead. (param: max_tokens) (code: unsupported_parameter)

`max_completion_tokens` is not yet supported by baibot, so the best we can do for now is to omit `max_response_tokens` (and thus `max_tokens`) from the configuration whenever it is unspecified.

Ref: db9422740c
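
For illustration, here is a rough sketch of what the relevant part of the rendered configuration looks like when the variable is left unset, so that no `max_tokens` reaches the OpenAI API. The field names come from the template in the diff below; the concrete values are made-up examples, not baibot defaults:

```yaml
# Sketch of output rendered from openai-config.yml.j2 (values are illustrative):
text_generation:
  model_id: "o1-mini"
  prompt: "You are a helpful assistant."
  temperature: 1.0
  # max_response_tokens is omitted entirely when unspecified,
  # so the API request carries no max_tokens parameter
  max_context_tokens: 128000
```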
```diff
@@ -8,7 +8,9 @@ text_generation:
   model_id: {{ matrix_bot_baibot_config_agents_static_definitions_openai_config_text_generation_model_id | to_json }}
   prompt: {{ matrix_bot_baibot_config_agents_static_definitions_openai_config_text_generation_prompt | to_json }}
   temperature: {{ matrix_bot_baibot_config_agents_static_definitions_openai_config_text_generation_temperature | to_json }}
+  {% if matrix_bot_baibot_config_agents_static_definitions_openai_config_text_generation_max_response_tokens %}
   max_response_tokens: {{ matrix_bot_baibot_config_agents_static_definitions_openai_config_text_generation_max_response_tokens | int | to_json }}
+  {% endif %}
   max_context_tokens: {{ matrix_bot_baibot_config_agents_static_definitions_openai_config_text_generation_max_context_tokens | int | to_json }}
 {% endif %}
 
```
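
With this change, a playbook user can opt out of `max_response_tokens` by leaving the variable empty. A hypothetical `vars.yml` snippet (the variable names are taken from the template above; the values are illustrative):

```yaml
# Use a reasoning model and leave max_response_tokens unset.
matrix_bot_baibot_config_agents_static_definitions_openai_config_text_generation_model_id: "o1-mini"
# Any Jinja-falsy value (~/null, 0, or an empty string) makes the new
# {% if %} guard skip the max_response_tokens line in the rendered config.
matrix_bot_baibot_config_agents_static_definitions_openai_config_text_generation_max_response_tokens: ~
```

Note that the guard uses plain truthiness, so an explicit `max_response_tokens: 0` is also treated as "unspecified" rather than as a zero-token limit.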