Introduction
Hello everyone, this is Best Partner, and I'm Da Fei. In recent years, the conversation in the AI field has shifted from "whether to do AI" to "how to do AI well." From the explosion of generative AI in 2023 to today in 2025, more and more companies are focusing on practical questions such as product development, team building, and cost control, rather than debating what AI might make possible. Recently, ICONIQ Capital, which has managed the family offices of many Silicon Valley tech giants, seized on this turning point and released a 68-page report dissecting the hands-on experience of 300 software companies building AI products. Whether they are AI-native or so-called AI-enabled companies, their lessons and successful practices offer plenty of inspiration, and today Da Fei will walk you through them.
Types of AI Companies
To talk about AI product development, we first need to understand what types of AI companies are currently on the market. According to the report's research, there are mainly two types: AI-native companies and AI-enabled companies.
AI-Native Companies
AI-native companies are those whose core products or business models are driven entirely by AI; their value comes almost wholly from model training, inference, and continuous learning. They make up 32% of the companies surveyed, and their products iterate very quickly. The report notes that only 1% of AI-native companies are still in the pre-release stage, while 11% of AI-enabled companies are stuck there. More importantly, 47% of AI-native products have entered the scale-up stage, meaning the product has verified market fit and is rapidly expanding its user base and infrastructure. The likely reason is that AI-native companies have advantages in team composition, infrastructure, and financing models, so they can move through the trial-and-error stage faster.
AI-Enabled Companies
AI-enabled companies come in two flavors. One embeds AI features in a flagship product, such as adding an AI recommendation module to an existing CRM system; these account for 31%. The other builds AI products independent of the core business, such as a collaboration-tool company launching a standalone AI writing assistant; these account for 37%. For these companies, AI is more a tool for enhancing the value of existing products than the core of the business. Traditional SaaS giants like Salesforce and Atlassian, for example, are now adding AI features to their core products to improve automation, personalization, and user productivity, but the underlying business model and user experience have not changed significantly.
Product Development
The differences between these two paths determine their differences in product development, team building, and even cost structure from the very beginning. Next, we will look at what these companies are doing and what problems they are facing from the specific aspects of product development.
Product Types
Whether it's AI-native or AI-enabled companies, the most popular product types at present are basically divided into two categories: agent workflows and application-layer products. Among them, 79% of AI-native companies are working on agent workflows. The so-called agent workflow simply means letting AI act like an "agent" to autonomously complete a series of tasks, such as automatically handling the entire process of customer inquiries, from understanding the problem to finding information to generating a response, and even being able to adjust strategies according to user feedback. In addition, vertical and horizontal AI applications are also very popular. 65% and 56% of AI-native companies are developing these applications, respectively, while the proportions of AI-enabled companies are relatively low, at 49% and 40% respectively. This also reflects the different positioning of the two types of companies. AI-native companies are more focused on solving specific business process problems through AI, while AI-enabled companies hope to enhance the versatility of existing platforms through AI.
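To make the idea concrete, here is a minimal sketch of what such an agent-style loop could look like. It is only an illustration under stated assumptions: call_llm and search_kb are hypothetical placeholders for a hosted model API and a knowledge-base lookup, not any specific framework.

```python
# Sketch of an agent-style workflow for a customer inquiry (illustrative only).
# call_llm() and search_kb() are hypothetical stand-ins for a model API and a
# knowledge-base search; real agent frameworks add planning, tools, and retries.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted language-model API."""
    raise NotImplementedError

def search_kb(query: str) -> list[str]:
    """Placeholder for a knowledge-base or vector-store lookup."""
    raise NotImplementedError

def handle_inquiry(inquiry: str, max_rounds: int = 3) -> str:
    # 1. Understand the problem: ask the model to restate the user's intent.
    intent = call_llm(f"Summarize this customer request in one sentence:\n{inquiry}")
    # 2. Find information: retrieve documents relevant to that intent.
    docs = "\n".join(search_kb(intent))
    # 3. Generate a response grounded in the retrieved documents.
    answer = call_llm(f"Answer the request using only these notes:\n{docs}\n\nRequest: {inquiry}")
    # 4. Adjust according to feedback: let the model critique and revise its draft.
    for _ in range(max_rounds):
        critique = call_llm(f"Does this answer fully resolve the request? Reply OK or describe the gap.\n{answer}")
        if critique.strip().upper().startswith("OK"):
            break
        answer = call_llm(f"Revise the answer to address this gap: {critique}\n\nDraft: {answer}")
    return answer
```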
Model Use
In terms of model use, 80% of companies rely on third-party AI APIs, such as GPT and Claude. However, high-growth companies are obviously more "aggressive." 77% of high-growth companies will fine-tune existing foundation models, and 54% will develop proprietary models from scratch, while the proportions of other companies are 61% and 32% respectively. Why is there such a difference? High-growth companies usually have more resources and need to provide in-depth customization services for enterprise customers and adjust models according to customer data and needs. At this time, fine-tuning or self-development becomes a necessary option. Companies with limited resources are more inclined to use third-party APIs directly, which can bring products to market the fastest and reduce upfront investment.
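For the companies that simply call a third-party API, the integration itself is only a few lines of code; the hard part is everything around it. Below is a minimal sketch using the OpenAI Python SDK's v1-style chat interface; the model name is illustrative and an OPENAI_API_KEY environment variable is assumed to be set.

```python
# Minimal "call a third-party model API" sketch (OpenAI Python SDK, v1 interface).
# The model name is illustrative; OPENAI_API_KEY is read from the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any hosted chat model could be used here
    messages=[
        {"role": "system", "content": "You are a support assistant for a SaaS product."},
        {"role": "user", "content": "How do I reset my workspace password?"},
    ],
)
print(response.choices[0].message.content)
```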
Model Providers
From the perspective of model providers, OpenAI's GPT models are still the absolute mainstream, used by 95% of full-stack AI companies, followed by Anthropic's Claude and Google's Gemini. It is worth noting that the average company uses 2.8 models; the multi-model strategy is becoming increasingly common. For example, some companies use GPT-4 for simple text generation, Claude for long-document analysis, and Gemini for multi-modal tasks. This not only optimizes performance but also controls costs and avoids over-reliance on a single supplier.
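A multi-model setup is often implemented as a thin routing layer in front of the providers. The sketch below shows only the idea; the model names are examples, and the call_* functions are hypothetical placeholders rather than real SDK calls.

```python
# Illustrative multi-model routing: map each task type to the provider/model the
# team prefers for it. Model names are examples; call_* functions are hypothetical.

def call_openai(model: str, payload: str) -> str: raise NotImplementedError
def call_anthropic(model: str, payload: str) -> str: raise NotImplementedError
def call_google(model: str, payload: str) -> str: raise NotImplementedError

MODEL_ROUTES = {
    "short_text":    ("openai", "gpt-4o-mini"),
    "long_document": ("anthropic", "claude-sonnet"),
    "multimodal":    ("google", "gemini-pro"),
}

def route(task_type: str, payload: str) -> str:
    # Fall back to the short-text route for unknown task types.
    provider, model = MODEL_ROUTES.get(task_type, MODEL_ROUTES["short_text"])
    if provider == "openai":
        return call_openai(model, payload)
    if provider == "anthropic":
        return call_anthropic(model, payload)
    return call_google(model, payload)
```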
Model Selection Priorities
When choosing a model, the priorities differ completely by scenario. For customer-facing products, accuracy comes first by a wide margin: 74% of companies rank it top, with cost second at 57%. This is a big change from last year, when cost was almost the least important factor for customer-facing products; its rise this year may be because low-cost models like DeepSeek have made cost a more critical competitive factor. For internal AI tools, however, cost becomes the top consideration at 74%, followed by accuracy and privacy. After all, internal tools do not directly generate revenue, so cost control matters more, and internal data often involves confidential information, so privacy protection naturally becomes a key concern.
Training and Adaptation Technologies
In terms of training and adaptation techniques, the most commonly used are Retrieval-Augmented Generation (RAG) and fine-tuning, adopted by 69% and 67% of companies respectively. High-growth companies are also particularly fond of prompt engineering techniques such as few-shot and zero-shot prompting, probably because they need to adapt quickly to different customers' needs, and prompting lets them adjust outputs without retraining the model. Compared with last year, the number of companies using RAG and fine-tuning has risen significantly. Logically, as foundation models get stronger, the need for fine-tuning should shrink; in practice, though, enterprise customers demand deeper customization, and fine-tuning remains necessary for that.
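For readers unfamiliar with RAG, the core loop is small: embed the query, find the most similar stored passages, and feed them into the prompt. The sketch below makes simplifying assumptions: embed() is a placeholder for any embedding model, the "index" is an in-memory list, and llm is any text-generation callable; production systems add chunking and a vector database.

```python
# Minimal RAG sketch: retrieve the top-k most similar passages by cosine
# similarity and ground the answer in them. embed() and llm are placeholders.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return an embedding vector for the text."""
    raise NotImplementedError

def retrieve(query: str, index: list[tuple[str, np.ndarray]], k: int = 3) -> list[str]:
    q = embed(query)
    scored = [
        (float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))), text)
        for text, v in index
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # highest similarity first
    return [text for _, text in scored[:k]]

def answer_with_rag(query: str, index: list[tuple[str, np.ndarray]], llm) -> str:
    context = "\n".join(retrieve(query, index))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```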
Model Deployment Challenges
When deploying models, the three biggest challenges are hallucination, interpretability and trust, and proving return on investment (ROI), accounting for 39%, 38%, and 34% respectively. In addition, computing costs and security are also relatively large problems. For example, after an AI product is launched, if the user volume suddenly increases, the API call fee may soar, directly eating up profits. And if the model is maliciously attacked and generates harmful content, it may also face legal risks.
Infrastructure
In terms of infrastructure, most companies have chosen an asset-light model: 68% run entirely on cloud services, and 64% rely on external AI API providers. The advantages are obvious: lower upfront investment, no servers to maintain, and faster time to market. But this also creates new problems. Vendor selection, service-level agreement (SLA) negotiation, and usage-based cost management all become strategic issues; if an API provider suddenly raises prices or suffers an outage, the entire product is affected. Many companies therefore sign long-term agreements with suppliers and even keep backup plans. Only 23% of companies use a hybrid cloud-plus-on-premises setup, and fewer than 10% run entirely on their own infrastructure. These companies usually have special needs: financial institutions, for example, must keep data on their own servers for compliance, and latency-critical applications such as AI models for autonomous driving need local compute to reduce latency.
Market and Pricing
In the market, the most popular pricing model is now hybrid pricing, at 38%: a subscription combined with usage-based or outcome-based charges. For example, basic features carry a monthly fee, extra usage beyond a threshold is billed on top, or the vendor takes a percentage of the cost the AI saves the customer. Most AI-enabled companies treat AI as a value-add: 40% put AI features in premium tiers and 33% offer them for free. In effect, AI is used to drive upgrades or prevent churn rather than as a primary revenue source; many SaaS tools advertise that the "premium version includes AI analytics" to push customers up from the basic plan. The report warns that this model may not last, though. As AI costs keep rising, giving it away squeezes product margins, and usage varies enormously between users: heavy users cost a lot to serve, while light users bring in little revenue, which makes pricing awkward. As a result, 37% of companies plan to adjust their pricing within the next 12 months, for example by moving to more flexible usage-based pricing or pricing tied to the specific value the AI delivers.
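To see how a hybrid model behaves for heavy versus light users, here is a toy calculation; every number in it is invented purely for illustration.

```python
# Toy hybrid-pricing calculation: a flat subscription covers a usage allowance,
# overage is billed per unit, and an optional outcome-based fee takes a share of
# the savings AI delivers. All prices and thresholds are made up for the example.

def monthly_bill(units_used: int, savings_delivered: float = 0.0) -> float:
    base_fee = 99.0          # subscription covering the first 1,000 units
    included_units = 1_000
    overage_rate = 0.05      # charge per unit beyond the allowance
    outcome_share = 0.10     # 10% of documented cost savings

    overage = max(0, units_used - included_units) * overage_rate
    outcome_fee = savings_delivered * outcome_share
    return base_fee + overage + outcome_fee

print(monthly_bill(5_000, 2_000))  # heavy user: 99 + 200 + 200 = 499.0
print(monthly_bill(300))           # light user: 99.0
```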
Transparency and Compliance
As AI products scale up, transparency becomes more and more important. In the scale-up stage, 25% of companies provide detailed model transparency reports and 47% explain how AI affects results, versus only 6% providing detailed reports in the pre-launch stage. This makes sense: the more mature the product, the more customers, especially enterprise customers, demand to understand how the AI works. On compliance, only 13% have a dedicated AI compliance team, 29% have a formal AI ethics and governance policy, and 47% at least comply with data privacy laws such as GDPR and CCPA. Many companies, in other words, are still treating compliance reactively rather than actively building a compliance system; as AI regulation tightens across countries, this could become a risk. In addition, 66% of companies use a human-in-the-loop approach to ensure fairness and safety: humans review key decisions, for example having a lawyer check an AI-generated contract before it goes out.
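In code, human-in-the-loop often boils down to a review gate: outputs above some risk threshold go to a person instead of straight to the customer. The sketch below only illustrates the pattern; the risk scorer, reviewer queue, and threshold are hypothetical.

```python
# Illustrative human-in-the-loop gate: route risky AI outputs to a reviewer.
# risk_score(), send_to_reviewer(), and publish() are hypothetical placeholders.

def risk_score(draft: str) -> float:
    """Placeholder: 0.0 (safe) to 1.0 (high risk), e.g. from a classifier."""
    raise NotImplementedError

def send_to_reviewer(draft: str) -> None:
    """Placeholder: push the draft into a human review queue."""

def publish(draft: str) -> None:
    """Placeholder: deliver the draft to the end user."""

def release(draft: str, threshold: float = 0.3) -> None:
    # High-stakes outputs (e.g. an AI-generated contract) always get human review.
    if risk_score(draft) >= threshold:
        send_to_reviewer(draft)
    else:
        publish(draft)
```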
Team Building
In terms of teams, the larger the company, the more likely it is to have a dedicated AI leader, such as a Chief AI Officer or a Director of Machine Learning. Among companies with annual revenue above 100 million US dollars, at least 50% have a dedicated AI leader, versus only 33% of companies below that threshold. As a company grows, its AI business becomes more complex and someone is needed to coordinate strategy, technology, and compliance; in smaller companies the CTO or a product manager often handles it directly. By role, the most common AI positions today are AI/machine learning engineers (88% of companies have them), data scientists (72%), and AI product managers (67%). Hiring is hard, though: AI/machine learning engineers take an average of 70 days to recruit, data scientists 68 days, and AI product managers 67 days, far longer than ordinary engineering roles, mainly because qualified candidates are scarce and competition is fierce. Emerging roles such as prompt engineers and AI design specialists are also on the rise; they require both technical and business knowledge, which makes them sought after. Still, 46% of companies say hiring is not fast enough, citing a lack of qualified candidates first, followed by high costs and fierce competition. One company's technical director said they want engineers with experience deploying large models, but such people are rare; those with even a little experience ask for salaries more than 50% above ordinary engineers and are constantly being poached. To cope, many companies grow their own talent, for example by putting existing engineers through AI training or partnering with universities on internship programs. On average, companies plan to have 20%-30% of their engineers focused on AI, and high-growth companies go even higher, up to 37%. AI has clearly shifted from a side project to core business, and it needs enough engineers behind it.
Cost
In terms of costs, AI development at AI-enabled companies accounts for roughly 10% to 20% of the R&D budget, and the share tends to be lower at higher revenue levels. In 2024, companies with annual revenue above 100 million US dollars spent about 10% to 15%, while companies below that spent around 14%, probably because large companies have a big R&D base of which AI is only one part, whereas smaller companies are more focused and willing to invest proportionally more in AI. In 2025, this share has risen markedly, generally by 5% to 10%, showing that everyone is increasing their AI investment. Where does the money go? The cost structure differs by stage. Before product launch, 57% of the AI budget goes to talent, because the main task is building a team and doing R&D. In the scale-up stage, the talent share drops to 36%, while infrastructure and cloud costs rise to 22% and model inference costs to 13%, because as users grow, variable costs such as servers and API calls grow with them. API usage fees are the hardest to control, ranked first by 70% of companies, followed by inference costs, model retraining, and updates. API fees are hard to control because they track user activity directly, and user behavior is hard to predict; a sudden event can send traffic soaring and double the API bill. To save money, 41% of companies are using open-source models like Llama 3, 37% are optimizing inference efficiency, for example by shrinking model size, and 28% are using model distillation or quantization. Some companies, for instance, have compressed a 70-billion-parameter model down to 7 billion parameters, cutting inference cost by 70% while losing only about 5% in quality, which is perfectly acceptable for many scenarios. Monthly model training costs also climb as products mature: an average of 163,000 US dollars before launch versus 1.5 million in the scale-up stage. Inference costs rise even more sharply: in the scale-up stage, high-growth companies spend 2.3 million US dollars per month and ordinary companies 1.1 million. Data storage and processing are not cheap either: in the scale-up stage, high-growth companies spend 2.6 million and 2 million US dollars per month respectively, versus 1.9 million and 1.8 million for ordinary companies. These figures show that the cost of scaling an AI product is very high; building the model is not enough, and substantial funds are needed to keep it running, which is why many start-ups need large rounds of financing.
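The sensitivity of API spend to traffic is easy to see with a back-of-the-envelope model: cost scales linearly with request volume and tokens per request, so a traffic spike flows straight into the bill. The per-token prices below are placeholders, not any vendor's actual rates.

```python
# Back-of-the-envelope API cost model (all prices are illustrative placeholders).

def monthly_api_cost(requests_per_day: int,
                     input_tokens: int,
                     output_tokens: int,
                     price_in_per_1k: float = 0.0005,
                     price_out_per_1k: float = 0.0015) -> float:
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * 30

baseline = monthly_api_cost(50_000, 1_500, 400)   # steady-state traffic
spike    = monthly_api_cost(100_000, 1_500, 400)  # traffic doubles, so does the bill
print(round(baseline, 2), round(spike, 2))        # 2025.0 4050.0
```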
Internal AI Use
In addition to building external AI products, companies are also using AI to improve internal efficiency, and this part of the budget has nearly doubled in 2025. For companies with annual revenue above 1 billion US dollars, the internal AI productivity budget has grown from an average of 3.42 million US dollars in 2024 to 6.04 million in 2025. Inside companies, about 70% of employees have access to AI tools, but only about 50% keep using them, and adoption is harder at large companies: among companies with annual revenue above 1 billion US dollars, only 44% of employees continue to use AI, versus 57% at small companies. This is probably because large companies have complex processes, employee habits are hard to change, and there are more concerns about data security. Don Vu, the Chief Data and Analytics Officer of New York Life, said that simply deploying tools is definitely not enough, especially for large enterprises. Getting employees to actually use them requires training, finding the most active AI users to lead the way, and above all sustained support from senior management; otherwise the effort easily fizzles out.
Internal AI Scenarios
Within the enterprise's R&D department, the most commonly used AI scenarios are coding assistance, content generation and writing assistants, and document and knowledge retrieval. The most effective one is also coding assistance. 65% of companies believe that it has the greatest impact on productivity. Among high-growth companies, 33% of the code is already written by AI, and in ordinary companies, it is 27%. However, the challenges are also obvious. 46% of companies say they "cannot find suitable usage scenarios," and 42% think it is "difficult to prove ROI."
Measuring Internal AI Effectiveness
So how should the effectiveness of internal AI be measured? 75% of companies track productivity improvements and 51% track cost savings, but only 20% track revenue growth; after all, internal tools usually do not create revenue directly. In terms of method, 14% track only quantitative indicators, 16% only qualitative ones, and 30% track both, yet 17% of companies have not started measuring at all. That is risky: if you cannot see the effect, you cannot decide whether to keep optimizing or give up.
AI Tool Stack
In terms of the AI tool stack, the two deep learning frameworks PyTorch and TensorFlow are the most popular, together accounting for more than half of usage. Hosted platforms are not far behind: AWS SageMaker and OpenAI's fine-tuning service are also widely used, which suggests teams fall into two camps, one preferring frameworks and full control of the pipeline, the other preferring hosted services for convenience. The Hugging Face ecosystem and Databricks' Mosaic AI Training are also on the rise; they provide higher-level tooling that makes training easier, such as calling an API instead of writing complex distributed training code.
Development Tools
On the development side, LangChain and Hugging Face's toolset are the most popular because they simplify tasks such as prompt chaining, batch processing, and model interfaces. 70% of companies also use private or custom APIs, which suggests many companies build on top of public models and then wrap the result in their own APIs for internal use.
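As a concrete example of the kind of chain LangChain simplifies, here is a minimal prompt-to-model-to-parser pipeline using its expression syntax. It assumes the langchain-openai package is installed and an API key is available; the model name is illustrative.

```python
# Minimal LangChain chain: prompt template -> chat model -> string output parser.
# Assumes the langchain-openai package is installed and OPENAI_API_KEY is set;
# the model name is illustrative.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Summarize this support ticket in two sentences:\n{ticket}"
)
llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"ticket": "User cannot export reports to CSV since the last update."}))
```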
Security Tools
Security tools are also becoming more and more important. 30% of companies will use Guardrails for security checks to prevent AI from generating harmful content, and 23% will use Vercel's AI SDK for rapid deployment. These tools can make applications more stable and compliant.
Monitoring and Observation Tools
In terms of monitoring and observability, nearly half of companies still use traditional APM tools such as Datadog and New Relic rather than specialized AI monitoring tools, probably because these tools are already integrated into existing workflows and teams do not want to learn new ones. Specialized AI monitoring tools are growing, though: LangSmith and Weights & Biases each have a 17% usage rate, and they can track prompt performance, detect model drift, and do other things traditional tools cannot.
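The simplest form of drift detection is just a statistical comparison between a reference window and the current window of evaluation scores; dedicated tools layer much richer tests on top. The sketch below is product-agnostic, and all the numbers are invented.

```python
# Toy drift check: flag when the mean evaluation score shifts beyond a tolerance.
# Dedicated monitoring tools offer far richer tests; this only shows the idea.
import statistics

def drifted(reference: list[float], current: list[float], tolerance: float = 0.05) -> bool:
    return abs(statistics.mean(current) - statistics.mean(reference)) > tolerance

reference_scores = [0.82, 0.80, 0.85, 0.81, 0.83]  # e.g. eval scores at launch
current_scores   = [0.74, 0.71, 0.73, 0.76, 0.72]  # scores after a prompt/model change
print(drifted(reference_scores, current_scores))   # True -> investigate
```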
Inference Optimization Tools
In terms of inference optimization, NVIDIA's TensorRT and Triton Inference Server together account for more than 60%, showing that NVIDIA nearly monopolizes this field; its GPUs are the industry benchmark, and the tight coupling of its software and hardware squeezes out speed and efficiency. Among non-NVIDIA options, ONNX Runtime accounts for 18%; its advantage is that it runs across hardware, including CPUs, GPUs, and other accelerators, which suits companies that do not want to be locked into NVIDIA.
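The cross-hardware appeal of ONNX Runtime shows up in its API: the same session code runs on whichever execution providers are installed. In the sketch below, the model file name and input shape are placeholders for an exported model.

```python
# Minimal ONNX Runtime inference sketch. "model.onnx" and the input shape are
# placeholders; the providers list falls back to CPU if CUDA is unavailable.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-shaped input
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```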
Data Processing and Feature Engineering Tools
In terms of data processing and feature engineering, Apache Spark and Kafka are the absolute mainstays, used by 44% and 42% of companies respectively. Spark is suitable for large-scale batch processing, and Kafka is suitable for real-time stream processing. These two are almost the standard for big data processing. However, for small-scale data processing, Pandas is still indispensable, used by 41% of companies. It is simple and flexible, suitable for rapid analysis and prototype development.
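For the small-scale, quick-analysis end of the spectrum, a few lines of pandas are often enough. The CSV path and column names below are hypothetical.

```python
# Quick pandas analysis sketch: rank customers by API spend.
# The file path and column names (customer_id, tokens, cost_usd) are hypothetical.
import pandas as pd

df = pd.read_csv("api_usage.csv")
summary = (
    df.groupby("customer_id", as_index=False)
      .agg(total_tokens=("tokens", "sum"), total_cost=("cost_usd", "sum"))
      .sort_values("total_cost", ascending=False)
)
print(summary.head(10))  # top 10 customers by spend
```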
Coding Assistance Tools
In coding assistance, GitHub Copilot dominates the market, used by 75% of development teams. It is deeply integrated with VS Code, supports many languages, and is trained on GitHub's massive code corpus, so it genuinely performs well. Cursor ranks second, used by 50% of companies; it focuses more on an AI-driven editing experience, such as real-time code refactoring, which many developers find convenient. Other tools such as Codeium and Sourcegraph also have users, but their share is far behind the top two, so the coding-assistance market has effectively formed a duopoly.
Conclusion
To sum up, according to this report, AI development in 2025 has entered the deep-water stage. It is no longer about who ships an AI feature first, but about who can make AI products stable, compliant, and economical while building a team and system capable of continuous innovation. AI-native companies are running faster thanks to their built-in advantages, but AI-enabled companies can also find their place by embedding AI in existing products. On model selection, the multi-model strategy has become mainstream, and cost and customization are now the keys to competition. Pricing and compliance are growing more complex and require balancing user experience, cost, and regulatory requirements. Talent remains the biggest bottleneck, especially people who understand both technology and business. For companies entering the market, several lessons stand out. First, be clear about the specific problems AI will solve; do not do AI for AI's sake. Second, keep API costs under control so profits do not evaporate as usage scales. Third, build a compliance system early rather than waiting for regulation to arrive. Finally, take internal AI adoption seriously; improving the team's efficiency often matters more than external publicity. Today, AI is no longer a future trend but a daily reality, and doing it well tests not only technical capability but also strategic vision, organizational capability, and cost awareness. I hope today's content gives you some inspiration, whether in entrepreneurship or at work, to better seize the opportunities AI brings. Thank you for watching this video, and see you next time.