Roo Code: First Impressions of New Experimental Features
Roo Code has introduced two new experimental features: codebase indexing and context condensing. This article shares a first look at both, highlighting their potential and areas for improvement. As an open-source project, Roo Code welcomes contributions on GitHub.
Codebase Indexing
Overview
Codebase indexing lets users index their entire codebase using either OpenAI or Ollama for embeddings. The data is stored in Qdrant, which unlocks a new tool called codebase search. The feature was developed by daniel-lxs, and the speaker expresses gratitude for the contribution.
Setup and Configuration
Setting up codebase indexing involves a few key steps:
- Docker Desktop (or Qdrant Cloud): Install Docker Desktop or use a cloud-hosted Qdrant instance. The speaker used a local Docker Desktop setup.
- Configure OpenAI or Ollama: Choose and configure either OpenAI or Ollama for embeddings. The speaker tested both and found they worked well.
- Enable Indexing: Enable codebase indexing in the Roo Code settings.
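Once Docker Desktop is running Qdrant (the standard Qdrant quickstart is `docker run -p 6333:6333 qdrant/qdrant`), it can help to confirm the instance responds before enabling indexing. A minimal sketch, assuming a local instance on Qdrant's default REST port:

```python
import json
import urllib.request

def qdrant_reachable(url="http://localhost:6333"):
    """Return Qdrant's version info if the instance responds, else None.
    Assumes a local instance on the default REST port (6333); Qdrant's
    root endpoint returns a small JSON document with name and version."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return json.load(resp)
    except OSError:
        return None

info = qdrant_reachable()
print("Qdrant up:" if info else "Qdrant not reachable", info)
```

If this prints `None`, indexing in Roo Code will have nothing to write to, which is worth ruling out before blaming the extension itself.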
The speaker emphasizes that the local embedding setup is particularly appealing.
Usage and Observations
It's crucial to explicitly tell Roo Code to use the codebase search tool; phrasing a question as "Using codebase search, tell me..." triggers it reliably, whereas relying on the prompt alone did not activate the tool automatically. Local embedding performed comparably to OpenAI in testing, though multiple iterations were sometimes needed to arrive at the correct answer.
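Under the hood, codebase search is embedding-based retrieval: chunks of code are embedded, and the query is matched against them by vector similarity. A minimal, model-free sketch of the idea, using sparse bag-of-words counts as a stand-in for a real embedding model such as OpenAI or Ollama (the chunks and query here are invented examples):

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a sparse bag-of-words vector of token counts.
    A real indexer would call an embedding model (OpenAI or Ollama)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def codebase_search(query, chunks, top_k=2):
    """Rank indexed code chunks by similarity to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

chunks = [
    "def authenticate_user(token): validate the session token",
    "def render_dashboard(data): draw charts for the dashboard",
    "def revoke_user_token(token): invalidate a session token",
]
results = codebase_search("how is a user token validated", chunks)
print(results)
```

The token-related chunks outrank the unrelated dashboard code, which is the same behavior the codebase search tool exhibits at scale, just with far better embeddings.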
Integration with Devstral
Devstral, running in ask mode with local embeddings, also benefited from codebase indexing. When prompted to use the codebase search tool, it provided excellent answers.
Comparison with Augment Code
Compared to Augment Code, Roo Code (with Devstral) required the user to explicitly request the codebase search tool, whereas Augment Code appeared to invoke its context engine automatically. With Claude 4, however, Roo Code's performance was impressive, producing diagrams and detailed flow summaries.
In one test, Roo Code significantly outperformed Augment Code at identifying the APIs associated with a specific role; Augment Code's chat mode appeared to be missing context in that particular case.
Context Condensing
Overview
Context condensing reduces the size of the conversation's context window, similar to features found in Claude Code and Cline. It comes in two forms: automatic condensing and manually triggered condensing. The speaker prefers the manual trigger.
Functionality and Performance
The condensing tool effectively reduces the context window. In one test using Devstral, the context window shrank from 53,000 to 23,000 tokens, and follow-up questions showed that the relevant context was retained. Another test with Anthropic's Claude 4 reduced the context from 103,000 to 62,000 tokens while maintaining the relevant information.
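The mechanics behind condensing are roughly: once the conversation exceeds a token budget, older messages are replaced by a compact summary while the most recent turns are kept verbatim. A rough illustrative sketch, not Roo Code's implementation; in real tools the summary is written by the LLM itself, so a crude truncation stands in for it here:

```python
def estimate_tokens(messages):
    # Rough heuristic: ~4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def condense(messages, keep_recent=2):
    """Replace older messages with one summary message; keep recent ones intact."""
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    if not old:
        return messages
    # Stand-in summary: the first 40 characters of each old message.
    summary = " | ".join(m["content"][:40] for m in old)
    return [{"role": "user", "content": f"[Condensed context] {summary}"}] + recent

def maybe_condense(messages, budget=1000):
    """Auto-condense mode: trigger only when the estimated size exceeds the budget."""
    return condense(messages) if estimate_tokens(messages) > budget else messages

history = [{"role": "user", "content": "x" * 3000},
           {"role": "assistant", "content": "y" * 2000},
           {"role": "user", "content": "latest question"},
           {"role": "assistant", "content": "latest answer"}]
smaller = maybe_condense(history)
print(estimate_tokens(history), "->", estimate_tokens(smaller))
```

The manual trigger the speaker prefers is simply calling `condense` directly instead of waiting for the budget check to fire.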
Areas for Improvement
Per-Codebase Indexing Option
The speaker suggests adding a per-codebase indexing option, similar to Augment Code. This would allow users to selectively enable indexing for specific projects or repositories.
UI Feedback
The UI feedback during indexing could be improved. There were instances where the UI appeared frozen, leading to uncertainty about the process's progress. Additional feedback, such as animations, would be beneficial.
File Watcher
A file watcher runs after codebase indexing, but there is no clear indication of what it is doing or which files have been indexed. A file viewer showing indexed files and when each was last indexed would be a valuable addition.
LM Studio Support
Adding support for LM Studio would be a welcome enhancement, given its greater configurability compared to Ollama.
Final Thoughts
The new experimental features in Roo Code are easy to set up and show great promise. Comparing Roo Code with Augment Code was insightful, and the codebase search tool has the potential to become a significant asset. The speaker encourages users to explore these features, contribute to the project on GitHub, and suggest other AI tools to explore.