Since 2024-10-02, `gpt-4o` is actually the same as `gpt-4o-2024-08-06`.
We previously used `gpt-4o-2024-08-06`, because it pointed to a
much better model (one with a larger maximum response token limit). Now that
the two are the same, it's better to stick to the unpinned model, making it
easier for future users to receive upgrades.
`gpt-4o` will point to `gpt-4o-2024-08-06` after the 2nd of October 2024
anyway. At that time, we can revert to the unpinned `gpt-4o`.
The reasons `gpt-4o-2024-08-06` was chosen now instead of `gpt-4o`:
- the `max_response_tokens` configuration was set to 16k, which matches
  `gpt-4o-2024-08-06`, but is too large for `gpt-4o` (max 4k); see the
  sketch after this list
- baibot's own configs for dynamically created agents, as well as the static
  config examples, use `gpt-4o-2024-08-06` and the larger
  `max_response_tokens` value
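
A minimal sketch of the relevant agent configuration fragment, assuming
baibot's YAML config layout (the `text_generation.model_id` and
`max_response_tokens` field names follow baibot's sample config, but treat
the exact structure as illustrative rather than authoritative):

```yaml
# Fragment of an agent definition (structure assumed; see baibot's sample config.yml)
config:
  text_generation:
    # The pinned snapshot supports a 16k response token limit,
    # while the pre-2024-10-02 `gpt-4o` alias capped out at roughly 4k.
    model_id: gpt-4o-2024-08-06
    max_response_tokens: 16384
```

Since 2024-10-02, the same `max_response_tokens` value also works with the
unpinned `gpt-4o` (which now resolves to this snapshot), which is why we can
switch back to the unpinned name.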
The playbook previously did not define a prompt for statically-defined
agents.
Since support for prompt variables landed in v1.1.0
(see 2a5a2d6a4d),
it makes sense to make use of them for a better out-of-the-box experience
(see https://github.com/etkecc/baibot/issues/10).
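
As an illustration, a statically-defined agent's prompt could now look roughly
like the following. This is a sketch only: the structure matches the fragment
shown earlier, and the `baibot_name` / `baibot_now_utc` variable names are
assumed examples, so check baibot's documentation for the exact set of
supported prompt variables:

```yaml
# Fragment of a static agent definition (structure assumed, as above)
config:
  text_generation:
    # Prompt variables (supported since v1.1.0) are expanded by baibot at runtime.
    prompt: |-
      You are a brief, but helpful bot called {{ baibot_name }}.
      The current date and time (UTC) is: {{ baibot_now_utc }}.
```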