

Each agent comes with an advanced chat interface where you can talk to your agent, upload files, generate and edit documents, and use voice input. This chat is similar to apps like ChatGPT and Claude. The main difference is that Abundly agents don’t live exclusively in the chat; the chat is just one of many possible communication channels. See Multi-channel Communication for more details.
Chat interface showing a conversation with an agent
Think of each conversation as a local context for the agent, which includes all messages in that conversation, plus the agent’s instructions, documents, and capabilities. If you want any chat content to be available to the agent outside of that specific conversation, ask the agent to save it in its instructions or as an agent document. Diary entries are an exception: when diary writing is enabled, the agent writes these automatically and can read them later, outside the chat.

Conversation context indicator

On desktop, the chat action bar includes a small circular indicator that shows how full the current conversation context is. As the conversation grows, the ring fills and changes color from gray to yellow, orange, and red. Hover over the indicator to see the current percentage and guidance, or click it to open a popover with detailed context usage information and links to optimization tips. The indicator is specific to each conversation; starting a new chat gives you a fresh context window.

When a conversation grows very long, a warning bar appears above the chat input suggesting you continue in a new chat. Click New chat with summary and the agent generates a handoff recap of the current conversation (your goal, progress so far, pending items, key decisions, and links to any documents involved) and drops it into a fresh chat, so you can pick up where you left off without dragging the full history along.

When to use the chat

The role of the chat depends a lot on what the agent’s job is. During initial setup and tuning, the chat is particularly useful for:
  • Discussing and updating the agent’s instructions: “I want you to help me handle incoming invoices, check that they are valid, and then route them to the right team. What do you need from me to do this?”
  • Testing the agent: “Check this incoming order PDF and process it according to the instructions, as if you received it in an email. Don’t interact with any external systems, just tell me what you would have done.”
  • Understanding and improving the agent’s behavior: “You flagged the latest invoice as potentially fraudulent, even though you’ve processed similar invoices before without any issues. Can you analyze your instructions and docs and figure out why? And then suggest how we can improve your instructions to avoid this in the future?”
  • Adding more context: “Since you keep asking me about people’s contact info, here is a link to our contacts page. Update your instructions to use this as a source of information.”
Once the agent is up and running, the role of the chat will vary.
  • Some agents are very chat-centric, with most of the interaction happening in the chat. For example, if the agent’s job is to analyze the competitive landscape, you might use the chat as the primary interface for those discussions.
  • Some agents mostly use other channels, such as email or Slack. In that case the built-in chat plays a secondary role.
  • Some agents handle a workflow automatically and rarely need to chat at all. For example, once the invoice router is up and running and doing its job, it works silently in the background for the most part, except when it needs tuning.

File uploads

You can upload files directly in the chat, using drag and drop or by clicking the upload button.
  • Images — The agent can analyze and describe images, extract text, or use them as context
  • Documents — PDFs, Word documents, spreadsheets, and presentations
  • Audio — Recordings that can be transcribed and analyzed
  • Data files — CSV, JSON, and other structured data
The agent processes these files in context and can reference them throughout the conversation.

Working with agent documents

From the chat, you can ask the agent to read, create, or update its agent documents and databases. This makes it easy to build up the agent’s knowledge base through natural conversation.
  • Saving documents: “Save these updated guidelines as an agent doc please”
  • Reading documents: (drop a PDF) “Check if this PDF complies with the guidelines document”
  • Updating databases: (after brainstorming) “Those are great ideas, please add them to the brainstorm database”
  • Creating databases: (paste a whiteboard photo) “Here are some notes from our product planning session, please create a database and store the product ideas, with suitable fields like category, prio, etc”.
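As a concrete illustration of that last prompt, the resulting database might be structured along the lines of the sketch below. This is purely hypothetical: the field names and types are invented for the example, and the agent decides the actual schema when it creates the database.

```json
{
  "name": "Product ideas",
  "fields": [
    { "name": "title", "type": "text" },
    { "name": "category", "type": "select", "options": ["UX", "Backend", "Growth"] },
    { "name": "prio", "type": "select", "options": ["Low", "Medium", "High"] },
    { "name": "notes", "type": "text" }
  ]
}
```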
When the agent creates visual or rich content—an HTML page, a React app, a chart, a diagram, a formatted report—it appears as a chat document that opens in a side panel next to the chat. Chat documents are scoped to the current conversation and are deleted with the chat. Click the Chat documents button in the chat action bar to browse everything created in the current conversation, and promote anything worth keeping to a permanent agent document.

Rich responses

Agents can respond with more than plain text: formatted text, diagrams, images, and voiceovers.
  • Formatted text: “Make a nice-looking draft blog post based on my messy brainstorm notes”
  • Diagrams: “Show me a flowchart of your workflow”
  • Images: “Create a nice-looking infographic of the given meeting notes”
  • Voiceovers: “Email me a voiceover of this article, using a casual female British accent”
  • Mixed formats: “Create a nice-looking draft blog post based on my messy brainstorm notes, include a suitable image and visual overview, and a voiceover link.”
The visualization below was generated by our own Release Noter agent that monitors GitHub and writes release notes. The prompt was: “Create a nice looking visual timeline of the top 5 features released during the past month, as a compact visual. Make it look like a poster, landscape aspect ratio.”
Example of a visual timeline generated by an agent

Interactive applications

Agents can generate interactive applications directly in the chat. This allows agents to create custom interfaces for specific tasks:
  • Dashboards: “Create a nice-looking interactive dashboard for our OKRs, where you can browse progress per team, and aggregate the progress by quarter.”
  • Forms: “Create a user feedback form that saves data to the user feedback database”
See Interactive Apps for more on interactive applications.
Interactive applications are rendered using React components that the agent generates on the fly. The agent can iterate on these based on your feedback.
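To make the idea tangible, here is a dependency-free sketch of what an agent-generated component boils down to. The real agent emits React components; this stand-in uses a tiny createElement-style helper instead of the React runtime, and the feedback form itself is a hypothetical example, not code Abundly actually produces.

```typescript
// A virtual node: a tag, its attributes, and child nodes or text.
type VNode = { tag: string; props: Record<string, string>; children: (VNode | string)[] };

// Minimal createElement-style helper, standing in for React.createElement.
const h = (tag: string, props: Record<string, string> = {}, ...children: (VNode | string)[]): VNode =>
  ({ tag, props, children });

// A hypothetical feedback form like the one in the example prompt above.
function FeedbackForm(): VNode {
  return h("form", { id: "feedback" },
    h("label", {}, "Your feedback"),
    h("textarea", { name: "message" }),
    h("button", { type: "submit" }, "Send"),
  );
}

// Render the virtual tree to an HTML string (enough for a sketch;
// React would instead mount it into the page and handle events).
function render(node: VNode | string): string {
  if (typeof node === "string") return node;
  const attrs = Object.entries(node.props).map(([k, v]) => ` ${k}="${v}"`).join("");
  return `<${node.tag}${attrs}>${node.children.map(render).join("")}</${node.tag}>`;
}

console.log(render(FeedbackForm()));
```

The point of the sketch is the shape of the work: the agent writes a component as ordinary code, and iterating on your feedback just means regenerating that code.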

Code execution

If you enable the Code execution capability, the agent can execute code directly in the chat. This is useful for situations where large amounts of data need to be processed, or where the agent needs to perform complex calculations.
  • “Calculate the total cost of the project, including materials and labor”
  • “Generate a report of the sales data, including charts and tables”
  • “Aggregate the sales data in the database and store in a new database called ‘Sales by product category’”
This allows the number crunching and data processing to be done using code instead of the LLM, which is faster, cheaper, and more reliable. This is especially useful when working with large datasets or complex calculations. See Code Execution and Scripts for more on code execution.
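To illustrate why code beats token-by-token LLM arithmetic here, the “Sales by product category” prompt above reduces to a few lines of the kind of script an agent might generate and run. The records and field names below are invented for the sketch; the real data would come from the agent’s database.

```typescript
// Invented sample data standing in for a sales database.
type Sale = { product: string; category: string; amount: number };

const sales: Sale[] = [
  { product: "Widget A", category: "Widgets", amount: 120 },
  { product: "Widget B", category: "Widgets", amount: 80 },
  { product: "Gadget X", category: "Gadgets", amount: 200 },
];

// Sum amounts per category in one pass. The result is exact for any
// dataset size, unlike asking the LLM to add the numbers itself.
function salesByCategory(rows: Sale[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const row of rows) {
    totals[row.category] = (totals[row.category] ?? 0) + row.amount;
  }
  return totals;
}

console.log(salesByCategory(sales)); // { Widgets: 200, Gadgets: 200 }
```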

Voice input

You don’t have to type: press the microphone button to speak, and your speech is transcribed to text. This works in most languages and is often a huge time saver.
You don’t need to think carefully about what to say; just ramble your thoughts freely and they will be transcribed. The transcript may look messy, but the agent will usually understand what you meant.

Editing sent messages

You can edit a message you already sent if you want to correct it or add missing details. This is useful when a small wording change would otherwise require restarting the discussion. When you send an edited message, the conversation continues from that updated point and messages below it are removed so the agent can respond to the revised context. You can also use voice input while editing, so you can dictate changes instead of typing.

Text-to-speech

If you hover over a chat message, you can click on the “speak” button below to have it read aloud. This is useful for situations such as taking a walk, where you don’t want to look at your phone more than necessary. There is also a walk-and-talk mode that is optimized for this use case. See Voice communication for more on voice features, including walk-and-talk mode.

Multi-user chat

The chat is collaborative—different users in your workspace can talk to the same agent, and multiple people can participate in the same conversation. Responses are live-streamed, so if several users are watching the same chat, they all see the response as it generates.

Conversation history

All conversations are saved automatically. The sidebar lists every chat you have with the agent, and a filter at the top lets you switch between all chats and starred chats only. A List view link opens a full chats page where you can search, sort, and bulk-manage conversations. From the chats page or the sidebar you can:
  • Continue a previous conversation by opening it
  • Rename a chat
  • Star important chats so they stay at the top in a dedicated Starred section
  • Delete chats individually, or select multiple on the chats page and bulk delete
  • Share conversation links with teammates

Diary

When diary writing is enabled, the agent automatically writes diary entries when significant events happen during a conversation. This is a high-level record of the agent’s activities and internal reasoning. Click “Diary & Approvals” in the sidebar to view the diary entries. The agent is also able to read its own diary entries, but will only do so if asked, or if it needs some specific information from the diary.
The agent decides when to write a diary entry based on the conversation content and the agent’s internal reasoning. For triggers and other events, diary entries are also written automatically as long as diary writing is enabled for that agent.
See Activity monitoring for more on diary entries and activity monitoring.

Model switching

You can switch LLMs in the middle of a conversation. For example, if the conversation doesn’t require deep reasoning, you can switch to a cheaper model to get faster and cheaper responses. See Model selection for more on model switching.