Monday, 14 August 2023

#5 AI Notes: The Power of Context and Other Insights on Building a GPT-driven Website Chatbot

**AI and the Considerations for Leveraging OpenAI**

In this article, we’ll discuss using OpenAI’s language model and the considerations involved in developing a chatbot. We’ll explore the purpose, context, data operations/storage, and limitations when leveraging AI. These insights were gained from our own experience developing a GPT-powered website chatbot for Insignia Ventures. Let’s dive in!

**1. Purpose: What is it for?**

When considering using OpenAI, it’s important to determine the purpose and desired input/output of the model. OpenAI offers various options, including GPT-4, GPT-3.5 Turbo, DALL-E, and Whisper, each with different costs and capabilities.

For a website chatbot like ours, GPT-3.5 Turbo proved sufficient for simple dialogues with predefined questions and answers.

**2. Context: How much data is needed?**

Understanding how much data you need, and in what form, is crucial. With language models, cost scales with the number of tokens processed, and context windows range from 4K tokens for GPT-3.5 Turbo to 32K tokens for GPT-4. A token is a subword unit, roughly four characters of English text, so longer prompts and longer responses both directly increase expenses.
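As a rough illustration of how token counts drive expenses, here is a back-of-the-envelope cost estimator. The four-characters-per-token ratio is only a heuristic, and the per-1K-token price used below is illustrative, not an official figure:

```python
# Rough cost estimate for one language-model call.
# Assumptions (not official figures): ~4 English characters per token,
# and an illustrative price of $0.002 per 1K tokens.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4-characters-per-token heuristic."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, expected_response_chars: int,
                  price_per_1k_tokens: float = 0.002) -> float:
    """Estimate the dollar cost of one call: prompt tokens plus response tokens."""
    total_tokens = estimate_tokens(prompt) + expected_response_chars // 4
    return total_tokens / 1000 * price_per_1k_tokens

prompt = "What does Insignia Ventures invest in?" + " ...context..." * 50
print(f"~{estimate_tokens(prompt)} prompt tokens, "
      f"~${estimate_cost(prompt, 800):.5f} per call")
```

Even this crude arithmetic makes the point: every extra paragraph of context sent with each question multiplies across every user conversation.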

To optimize costs, one approach is to use OpenAI's Ada embedding model. By converting questions and context passages into embedding vectors and comparing them with cosine similarity, only the most relevant context needs to be sent to GPT, which makes each call more cost-efficient. Categorizing contexts and weighting specific ones can improve efficiency further.
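The retrieval step can be sketched as follows. This is a minimal illustration: the three-dimensional vectors and passage texts are toy stand-ins (real Ada embeddings are 1536-dimensional and would come from the embeddings API):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy stand-ins for precomputed embeddings of stored context passages.
context_store = {
    "Insignia invests in Southeast Asian startups.": [0.9, 0.1, 0.0],
    "The office cafeteria opens at 8am.":            [0.0, 0.2, 0.9],
}

def most_relevant_context(question_embedding, store, top_k=1):
    """Return the top_k context passages closest to the question embedding."""
    ranked = sorted(store.items(),
                    key=lambda kv: cosine_similarity(question_embedding, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A question like "Where does Insignia invest?" would embed near the first passage:
print(most_relevant_context([0.8, 0.2, 0.1], context_store))
```

Only the passages returned here would be included in the GPT prompt, keeping the token count (and therefore the cost) of each call small.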

**3. Data Operations/Storage: What are the requirements?**

Consider the data operations and storage needs of your use case; latency and reliability depend on the chosen model. For our website chatbot, we stored common questions along with OpenAI's answers in a database, and a rating feature on those answers told us when to ask OpenAI to regenerate them.
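The cache-plus-rating pattern can be sketched like this. The class names, the five-star rating scale, and the refresh threshold below are hypothetical choices for illustration, not the exact scheme used in production:

```python
# Hypothetical sketch: serve stored answers from a cache, and flag an answer
# for regeneration once enough user ratings arrive and their average is poor.
from dataclasses import dataclass, field

@dataclass
class CachedAnswer:
    text: str
    ratings: list = field(default_factory=list)

class AnswerCache:
    def __init__(self, refresh_below=3.0, min_ratings=3):
        self.store = {}                    # question -> CachedAnswer
        self.refresh_below = refresh_below
        self.min_ratings = min_ratings

    def put(self, question, answer_text):
        self.store[question] = CachedAnswer(answer_text)

    def rate(self, question, stars):
        """Record a user rating (e.g. 1-5 stars) for a stored answer."""
        self.store[question].ratings.append(stars)

    def needs_refresh(self, question):
        """True when enough ratings exist and their average is below threshold."""
        r = self.store[question].ratings
        return len(r) >= self.min_ratings and sum(r) / len(r) < self.refresh_below

cache = AnswerCache()
cache.put("What is Insignia?", "A venture capital firm.")
for stars in (2, 1, 3):
    cache.rate("What is Insignia?", stars)
print(cache.needs_refresh("What is Insignia?"))  # poor average -> regenerate
```

The benefit is that most questions are answered straight from the database with no API call at all; OpenAI is only consulted when the cache misses or an answer has gone stale.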

**Five Key Learnings**

Now, let’s explore five key learnings we gained from our experience in developing the chatbot with OpenAI:

**1. AI may not be necessary for every point in the user experience**

Prioritize pain points and the user journey when evaluating the need for AI. Rote automation might be more cost-effective than relying on machine learning. Forcing AI in certain situations can worsen the user experience on a large scale. Carefully assess if AI is truly necessary.

**2. Costs can vary based on context and complexity**

Consider all options, including using an API or developing a proprietary model. Evaluating long-term costs and budget is essential. Depending on the size of the context and the task complexity, one approach may be more cost-effective.

**3. Latency and storage trade-offs should be considered**

In some cases, faster response times require more in-house storage. This trade-off must be weighed against the budget and costs involved: hosted base models may offer better latency out of the box, while an in-house model folds storage costs into infrastructure you already maintain.

**4. Context is king, and feeding it is an ongoing process**

A GPT-powered application only knows what its context contains, so that context requires continuous updates to stay useful. Feeding it can be labor-intensive, depending on the budget and use case. Regularly provide new inputs beyond user engagement, such as market data for a financial-advisor AI or legal precedents for a legal-advisor AI, and incorporate this "feeding" process into your data operations.
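One simple way to operationalize the feeding process is to timestamp every context document and periodically flag anything past a maximum age for re-ingestion. This is a hypothetical sketch; the store structure and the weekly refresh window are illustrative assumptions:

```python
# Hypothetical sketch of a "feeding" loop: context documents are upserted
# with a timestamp, and anything older than max_age is flagged for refresh.
import time

class ContextStore:
    def __init__(self, max_age_seconds):
        self.docs = {}            # doc_id -> (text, ingested_at)
        self.max_age = max_age_seconds

    def upsert(self, doc_id, text, now=None):
        self.docs[doc_id] = (text, now if now is not None else time.time())

    def stale_ids(self, now=None):
        """Doc IDs whose content is older than max_age and needs re-ingestion."""
        now = now if now is not None else time.time()
        return [d for d, (_, t) in self.docs.items() if now - t > self.max_age]

store = ContextStore(max_age_seconds=7 * 24 * 3600)        # refresh weekly
store.upsert("market-snapshot", "SEA funding totals...", now=0)
store.upsert("faq", "Insignia invests in Southeast Asia.", now=6 * 24 * 3600)
print(store.stale_ids(now=8 * 24 * 3600))  # only the week-old market data
```

Hooking a job like this into the data pipeline turns context feeding from an ad-hoc chore into a scheduled part of operations.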

**5. Understand the limitations of scaled products**

Scaled products have limitations in terms of requests per minute, latency, context size, and more. Consider these limitations when developing the user experience around your use case. Workarounds for costs can impact the delivery of a user-friendly experience. Evaluate how much the use case relies on third-party infrastructure and in-house capabilities.

**Editor Notes: Opinion Piece**

In leveraging OpenAI for AI applications, careful consideration of purpose, context, data operations/storage, and limitations is crucial. Determining if AI is truly necessary, evaluating costs, and understanding trade-offs are key aspects. Additionally, ongoing feeding of context and understanding the limitations of scaled products are essential for success.

As AI continues to evolve, it’s important to stay up-to-date with the latest developments and best practices. To learn more about AI and its applications, visit the GPT News Room at https://gptnewsroom.com.

**References:**

“AI and the Considerations for Leveraging OpenAI”, Insignia Ventures, https://insignia.vc.

**Disclaimer:** The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of GPT News Room.
