Articles about AI

Implementing Semantic Search with Sequel and pgvector

In my previous post, An LLM-based AI Assistant for the FastRuby.io Newsletter, I introduced an AI-powered assistant we built with Sinatra to help our marketing team write summaries of blog posts for our newsletter.

In this post, I’ll go over how we implemented semantic search using pgvector and Sequel to fetch examples of previous summaries based on article content.

Semantic search allows our AI assistant to find the most relevant past examples based on meaning and context when generating new summaries. This helps ensure consistency in tone and style, and provides context-aware results that serve as better examples for the large language model (LLM), improving the quality of the generated output.
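The core idea can be sketched in a few lines of plain Ruby (illustrative only; the embeddings and summary names below are made up, and in the real setup pgvector's `<=>` cosine-distance operator does this ranking inside Postgres via a Sequel query):

```ruby
# Minimal sketch of semantic search: embeddings are vectors, and the
# most relevant past summary is the one whose embedding has the
# highest cosine similarity to the new article's embedding.
# (pgvector's `<=>` operator computes cosine distance in-database.)
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

# Illustrative, tiny embeddings; real ones have hundreds of dimensions.
past_summaries = {
  "upgrading Rails apps" => [0.9, 0.1, 0.0],
  "testing with RSpec"   => [0.1, 0.9, 0.2]
}
query = [0.8, 0.2, 0.1] # embedding of the new article's content

# Rank past summaries by similarity and pick the closest one.
best = past_summaries.max_by { |_, emb| cosine_similarity(query, emb) }
```

Here `best.first` is `"upgrading Rails apps"`, the past summary whose embedding points in nearly the same direction as the query.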

Read more »

An LLM-based AI Assistant for the FastRuby.io Newsletter

Every other week, the FastRuby.io newsletter brings a curated list of the best Ruby and Rails articles, tutorials, and news to your inbox.

Our engineering team collects links to interesting articles and our marketing team curates them, writes a summary for each article, and creates the newsletter. This process is quite manual and involves some back-and-forth to ensure summaries are accurate, engaging, and relevant to our audience.

To make it more efficient, we have developed an AI assistant that helps us curate articles and generate the summaries for the newsletter.

Read more »

The South by Southwest EDU Conference and AI in Education

I attended the South by Southwest EDU conference in Austin, Texas, for the first time this year and it was a great experience.

I had the opportunity to connect with several professionals in higher education, education technology and AI in education, and I learned a lot about the challenges and opportunities in the field.

As a company providing custom AI solutions, it’s very important that we understand the needs of institutions, educators and students, their concerns regarding artificial intelligence, and how we can help them leverage AI to solve real problems while protecting privacy and ensuring fairness.

Read more »

A Deep Dive into Prompt Engineering Techniques: Part 1

Large Language Models (LLMs) are widely available and easily accessible, and they are increasingly part of business. Whether you’re interacting with an LLM via the provided interface or connecting via an API and integrating it into other systems, it’s helpful to understand how to get the best possible results out of the model.

Prompt Engineering is a technique that focuses on refining your input to get the best possible output from the language model. Of all the techniques available to fit an LLM to your use case, it’s the most straightforward to implement since it focuses primarily on improving the content of the input. In this Part 1 article, we’ll dive into different Prompt Engineering techniques and how to leverage them to write highly effective prompts, focusing on single-prompt and chain techniques. In our following article, we’ll cover agents and multi-modal techniques.
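As a taste of one single-prompt technique, here is a hedged Ruby sketch of few-shot prompting: the prompt includes a couple of input/output examples so the model can infer the expected format and tone. The example articles and summaries below are made up for illustration, as is the helper name:

```ruby
# Hypothetical few-shot examples showing the desired summary style.
EXAMPLES = [
  { article: "A post on upgrading Rails 6 to Rails 7",
    summary: "A practical walkthrough of a Rails 6-to-7 upgrade." },
  { article: "A post on flaky system tests",
    summary: "Strategies for diagnosing and fixing flaky system tests." }
].freeze

# Build a single prompt that shows the model the examples before
# asking it to summarize the new article in the same style.
def few_shot_prompt(article)
  shots = EXAMPLES.map do |ex|
    "Article: #{ex[:article]}\nSummary: #{ex[:summary]}"
  end
  <<~PROMPT
    Write a one-sentence newsletter summary in the same style as the examples.

    #{shots.join("\n\n")}

    Article: #{article}
    Summary:
  PROMPT
end

prompt = few_shot_prompt("A post on semantic search with pgvector")
```

The resulting string would be sent to the model as-is; the examples do the work of specifying tone and length without any extra instructions.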

For other available techniques to enhance LLM capabilities, check out our Techniques to Enhance the Capabilities of LLMs for your Specific Use Case article!

New to LLMs? Check out this article on the landscape by our friends over at Shift: Guest Post: Navigating the AI Chatbot Landscape.

Read more »

Guest Post: Navigating the AI Chatbot Landscape

We often partner with our friends at Shift Interactive when we need an extra set of hands or expertise to complement our own. Recently we’ve been collaborating with them on interesting artificial intelligence and machine learning projects. Check out their recent blog post to get an overview of the AI chatbot landscape.

You can also check out the next article in this series: Techniques to Enhance the Capabilities of LLMs for your Specific Use Case.

Read more »