How to Integrate Llama 3 in Your C# Projects: A Step-by-Step Guide

Meta recently launched the third version of its Large Language Model (LLM), named Llama 3. This open-source model is a major development in the artificial intelligence field, and it comes in two sizes, 8B and 70B parameters, both of which can be run on your own computer.

This gives C# developers a rare opportunity to use advanced AI technology locally without depending on Python, the language most LLM tooling targets.

This article walks through integrating Llama 3 into your C# projects, from configuring your environment to building a working console application.

Preparing Your Environment

To use Llama 3 in your C# projects, you first need an open-source tool called Ollama, which makes it easy to run LLMs locally and exposes them through a local HTTP API. You can install it from the project page - https://github.com/ollama/ollama.

Once installed, you will see a notification at the bottom right of your screen informing you that Ollama is running and asking you to click to use it. Clicking on that message opens a command prompt.

Whether you use that prompt or open a command line yourself, run the following command:

ollama run llama3

Running this command downloads the Llama 3 8B model to your computer, which takes up about 5 GB of disk space. There is also a larger 70B model, which weighs in at around 40 GB.

We will use the 8B model in this article. You can use the 70B model instead, but a word of caution: running it requires some serious hardware.

If you want to install the 70B model, just specify so in the command:

ollama run llama3:70b
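
Either way, once a download finishes you can check which models are on disk, and how much space they take up, with Ollama's list command:

ollama list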

Once the download finishes, the model is ready to use locally. However, you may want to try it out first to make sure everything is working.

You can either enter a prompt at the command line or install a web interface.

To test it at the command line, just type a prompt at the >>> marker and the model will stream back a response.

So far, so good.
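
Under the hood, Ollama also exposes an HTTP API, which is exactly what our C# code will talk to later in this article. The server listens on http://localhost:11434 by default, and a plain GET request to the root path returns a short status message, so you can verify it is reachable with a few lines of C# (top-level statements, .NET 6 or later):

using System.Net.Http;

// Ollama's HTTP API listens on port 11434 by default.
// A GET to the root path returns "Ollama is running" when the server is up.
using var client = new HttpClient();
Console.WriteLine(await client.GetStringAsync("http://localhost:11434"));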

As I mentioned above, you can also install a web interface called Chatbot Ollama, found at this address: https://github.com/ivanfioravanti/chatbot-ollama

To install this web interface, create a folder on your computer and run the following command in that folder.

git clone https://github.com/ivanfioravanti/chatbot-ollama.git

After cloning the repository, change into the repository directory (using the cd command) and run the following commands:

npm ci
npm run dev

Once the development server is running, open the address shown in the terminal (for a Next.js app like this one, typically http://localhost:3000).

And voilà! You have your web interface.

C# Console Application Using Local Llama

So far, so good, but we have not done anything C#-specific yet. We downloaded and installed the model and confirmed that it is working locally.

Let's use this model in our app now.

Create a console application and add the OllamaSharp library via NuGet:

Install-Package OllamaSharp
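
If you prefer the .NET CLI over the Package Manager Console, the equivalent command is:

dotnet add package OllamaSharp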

And then replace the code in your Program.cs with the following code:

using OllamaSharp;

namespace LlamaSample;

internal class Program
{
    private static async Task Main(string[] args)
    {
        await GenerateText();
        //await CustomTask(); // uncomment to run the translation example instead
    }

    private static async Task GenerateText()
    {
        // Ollama listens on port 11434 by default.
        var uri = new Uri("http://localhost:11434");
        var ollama = new OllamaApiClient(uri)
        {
            // Must match the model name you pulled with "ollama run llama3".
            SelectedModel = "llama3"
        };

        var prompt = "Once upon a time, in a land far, far away...";

        // StreamCompletion writes each token to the console as it arrives
        // and returns a context that can be passed to follow-up calls.
        ConversationContext? context = null;
        context = await ollama.StreamCompletion(prompt, context, stream => Console.Write(stream?.Response));
        Console.ReadLine();
    }

    private static async Task CustomTask()
    {
        var uri = new Uri("http://localhost:11434");
        var ollama = new OllamaApiClient(uri)
        {
            SelectedModel = "llama3"
        };

        // Llama 3 also handles instruction-style prompts, such as translation.
        var prompt = "Translate the following English sentence to French: 'Hello, world!'";
        ConversationContext? context = null;
        context = await ollama.StreamCompletion(prompt, context, stream => Console.Write(stream?.Response));
        Console.ReadLine();
    }
}

Run the code with GenerateText and then with CustomTask, and you will see that your local LLM is responding nicely.
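
One thing to watch out for: if the model name in SelectedModel does not match a model Ollama has actually pulled, the calls above will fail. A quick way to check from C# is to list the local models; here is a minimal sketch, assuming the ListLocalModels method shown in OllamaSharp's README:

using OllamaSharp;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

// Prints every model the local Ollama instance has downloaded,
// e.g. "llama3:latest" or "llama3:70b".
var models = await ollama.ListLocalModels();
foreach (var model in models)
{
    Console.WriteLine(model.Name);
}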

If you want to explore further what you can do with OllamaSharp, you can find the code, along with some examples, here: https://github.com/awaescher/OllamaSharp

Incorporating Llama 3 into your C# projects unlocks endless opportunities, from improving current applications to developing brand new ones with advanced AI capabilities. Having the option to run these models on your own machine offers flexibility and autonomy. I hope this article provided a nice way to start developing your own solutions in C# using Llama 3.

Finally, here is the source code for this article: https://github.com/tjgokcen/LlamaSample