Running Large Language Models (LLMs) Locally using Ollama

Over the weekend I was reading this post on the Oracle Linux Blog.

It looked pretty simple, so I thought I would give it a go, and that led me down a rabbit hole for the next few days. 🙂

Why would anyone want to run LLMs locally?

There are potential issues with using a cloud-based LLM.

  • Security: Passing sensitive data or your company intellectual property to the cloud could be a problem. Using a local LLM means you can do pretty much what you want with it. It also means your prompts are not being stored and available to the outside world when the provider of your cloud-based LLM inevitably gets hacked, or sells your details.
  • Consistency: The cloud vendors update their models from time to time, and you may find a “new and improved” version of a model works less well for what you are doing. Just look what the “upgrade” to ChatGPT 5 broke. Using a local LLM means you are in control of if/when you update models.
  • Cost: You have the cost of the local kit, but after that it’s all “free”. Running a model requires a lot less kit than training a model. Depending on your needs, maybe running something locally will make more sense than paying for a cloud subscription. It can get really expensive if you require a large call volume on a cloud subscription.
  • Choice: There are lots of open models to choose from (see here). Maybe one of the more targeted models will work better for you than an all purpose model.

Apart from anything else, it’s just fun, which is why I’m doing it. 🙂

My (Bad) Kit

I don’t have any suitable kit for this, and I don’t like installing things directly on my laptop if I can help it, so I decided to use a minimum installation of Oracle Linux 9 on VirtualBox. I built it using Vagrant, but more about that later. I subsequently tried OL10 and Fedora 43 with similar results.

In the end the VM specs were as follows.

  • OS: OL9
  • vCPUs: 4
  • RAM: 10240M (or 16384M for gpt-oss:20b and codestral)

During the Ollama installation I got this message.

WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode.

Suffice to say, this was not going to perform well, but I didn’t care. I just wanted to see it running locally. Clearly if you have better kit, and can throw your GPU and loads of memory at it, it’s going to work better!

Basic Installation

Ollama is a tool that allows you to work with a range of open Large Language Models (LLMs). It’s not a model itself. Just a tool that allows you to work with them in a consistent manner.

Ollama is available for Linux, Windows and Mac, so you can install it on just about anything. If you want to download it for Windows and Mac, you can get it from here.

On Linux we can install it using a single command. The basic installation worked fine, but there were some subsequent issues. This command does the whole installation, and creates a Linux service called “ollama”.

curl -fsSL https://ollama.com/install.sh | sh

I set up port forwarding in the VM for port 11434, so I expected to be able to access the Ollama API from my host machine, but that didn’t work. After a little digging I found out that the “ollama” service is limited to localhost by default, so you can’t access it externally. To fix this I went nuclear and added the following to the “ollama” service config file (/etc/systemd/system/ollama.service).

Environment="OLLAMA_HOST=0.0.0.0:11434"

That means any network can access it, so you would need to protect it with a local or network firewall if you were doing this for real.
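Editing the unit file alone isn't enough; the change only takes effect once systemd reloads its config and the service is restarted. As a sketch (run as root), something like this should do it, with the curl call confirming the API responds afterwards:

```shell
# Reload the systemd config so the edited unit file is picked up,
# then restart the "ollama" service.
systemctl daemon-reload
systemctl restart ollama

# Confirm the API responds. From the host machine, use the
# forwarded port instead of hitting the VM directly.
curl -s http://localhost:11434/api/tags
```

A systemd drop-in override would be tidier than editing the unit file directly, but the effect is the same.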

With Ollama in place, it was time to pick a model. I started by using llama3, as they did in the blog post.

ollama pull llama3

It took a while to download, but once it had finished, the model was visible in my list of models using this API URL, from curl in a terminal or from a browser on the host machine.

http://localhost:11434/api/tags

There are a couple of ways to interact with the model. You can use the command line interface to speak to it by running this command.

ollama run llama3

Alternatively, you can issue the questions through the API, in streaming or non-streaming mode.

# Streaming.
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "In less than 50 words, what is Star Trek about?"
}'

# Non-streaming.
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "In less than 50 words, what is Star Trek about?",
"stream": false
}'

The API works fine, but the output is not ideal from a readability perspective, as it expects you to interact with it programmatically and format the output yourself. That led to the next rabbit hole.
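For quick checks from the command line, you can strip the answer out of the non-streaming JSON yourself. This is just a sketch, assuming python3 is available on the machine making the call:

```shell
# Ask a question in non-streaming mode, then pull the "response"
# field out of the JSON document that comes back.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "In less than 50 words, what is Star Trek about?",
  "stream": false
}' | python3 -c 'import sys, json; print(json.load(sys.stdin)["response"])'
```

If you have jq installed, `jq -r '.response'` does the same job.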

Open WebUI

The blog post mentioned using Open WebUI as an interface. I tried the suggested installation, but got into the weeds with conflicting package versions. Instead I decided to use the Open WebUI container, which made life a lot easier. I installed Podman and started the container like this.

dnf install -y podman

podman pull ghcr.io/open-webui/open-webui:main
mkdir -p /u01/open-webui
podman run -d -p 8080:8080 \
           -v /u01/open-webui:/app/backend/data \
           --name open-webui \
           -e OLLAMA_HOST=host.containers.internal \
           -e OLLAMA_PORT=11434 \
           --add-host host.containers.internal:host-gateway \
           ghcr.io/open-webui/open-webui:main

Notice I had to expose the host machine's localhost to the container, where it could be referenced as “host.containers.internal”. This was then set in the OLLAMA_HOST environment variable inside the container.

The combination of exposing the host localhost to the container and opening up Ollama’s API to external access took far too much messing about to sort out. As always, it’s easy when you know how. 🙂

I now had a neat web interface to access any models using this URL.

http://localhost:8080/

Which models?

As mentioned previously, I began with the llama3 model, but then I started to compare several other models.

# General models.
ollama pull llama3
ollama pull deepseek-r1
ollama pull gpt-oss:20b

# Code-focussed models.
ollama pull codellama
ollama pull deepseek-coder
ollama pull codestral

This gave me the choice of 3 general models, and 3 code-focussed models. I picked the smaller of the two gpt-oss models, and it still required more memory than the others. I don’t think I could have run the gpt-oss:120b model.

There are lots of open models available via the Ollama library (see here).

Vagrant

As mentioned previously, I built my playground using Vagrant, and you can see what I did here.

The VM itself is just a simple shell. There are two setup scripts you can run once connected as root. I left them as manual calls so I could mess around with them during each build. Their calls are commented out in the “root_setup.sh” file, but you could put them back in if you prefer.

ssh vagrant@localhost -p 2222
sudo su -

sh /vagrant/scripts/setup_ollama.sh
sh /vagrant/scripts/setup_open_webui.sh

If I included them in the build directly, you would be waiting a long time for the VM build to complete, and I didn’t want that. 🙂

I’ve left this as simple as possible. It’s not meant as an example of the best way to build this. You have been warned. 🙂

What did I do with it?

I was speaking to a colleague about it, and he said Claude Code was baked into his IDE (JetBrains) and he had used it recently to help with some C# code. He used to be a C# developer, but hadn’t done much over the last few years, so he was asking it some basic questions to remind himself. Since his was a “real” example of what someone might ask, I thought it might be a good idea to compare the results. You can see the comparison for a simple question in the section at the end if you are interested.

Overall Thoughts

Ollama works fine. It’s easy to set up and gives access to a whole bunch of models to play around with. After a bit of fiddling, Open WebUI worked fine and gave a much nicer interface than using the command line, or the API directly. It was worth the hassle.

Clearly running LLMs locally on my old laptop inside a VM was terribly slow, and would drive you crazy if you were using it for something real, but at least it allows you to get your hands dirty. If you have some more modern kit, with a compatible GPU and more memory to throw at it, it’s going to be a lot more usable. And remember, running a model takes a lot less kit than training a model. As you can see here, I’m able to run an existing model on junk kit.

I don’t know what I’m going to do with this going forward. Maybe nothing, but it was fun to play around with.

That’s it really. If you want to look at a bunch of output from varying models for the same simple question, you can see that below, but I won’t be offended if you skip the rest of the post. 🙂

And before you ask, I’m not trying to pivot into being an AI guru, but if Zuck wants to pay me $300M to play around with this stuff, I might consider it. 🙂

Cheers

Tim…

Basic Comparison

Some things to remember before we start.

  • The format of the output varies a lot when I am pasting it as text here, but typically it is formatted neatly on screen, usually like a markdown page, so ignore the formatting issues here.
  • The time it took to return an answer varied wildly. I’m not sure how much of that was due to the models, and how much of that was due to my laptop going on strike. As a result I ignored the timings and just looked at the output. With some better kit it would be nice to compare timings between models.
  • The runs of the local LLMs took so long I did very limited testing. 🙂

This was one of his questions.

“how do you join 2 lists in C#”

Claude Code is the cloud solution my colleague used, so it was pretty quick and gave him this.

In C#, there are several ways to join (concatenate) two lists:
1. Using Concat() (LINQ)
The most common and readable approach:
var list1 = new List<int> { 1, 2, 3 };
var list2 = new List<int> { 4, 5, 6 };
var combined = list1.Concat(list2).ToList();
// Result: { 1, 2, 3, 4, 5, 6 }

2. Using AddRange()
Modifies the original list (no new list created):
var list1 = new List<int> { 1, 2, 3 };
var list2 = new List<int> { 4, 5, 6 };
list1.AddRange(list2);
// list1 is now: { 1, 2, 3, 4, 5, 6 }

3. Using Collection Expression (C# 12+)
The newest and most concise syntax:
var list1 = new List<int> { 1, 2, 3 };
var list2 = new List<int> { 4, 5, 6 };
var combined = [..list1, ..list2];
// Result: { 1, 2, 3, 4, 5, 6 }

4. Using Union() (removes duplicates)
If you want to avoid duplicate values:
var list1 = new List<int> { 1, 2, 3 };
var list2 = new List<int> { 3, 4, 5 };
var combined = list1.Union(list2).ToList();
// Result: { 1, 2, 3, 4, 5 } - note: 3 appears only once

Quick Comparison
Method	       Creates New List	      Preserves Duplicates   Performance
Concat()       Yes                    Yes                    Good
AddRange()     No (modifies original) Yes                    Best
[..list1, ..list2] Yes	              Yes                    Good
Union()	       Yes                    No                     Slower
Choose based on your needs:
Need to preserve both original lists? Use Concat() or collection expressions [..]
Want to modify the first list? Use AddRange()
Need to remove duplicates? Use Union()

The llama3 model produced the following.

There are several ways to join two lists in C#. Here are a few examples:
Using the Concat method:
List<string> list1 = new List<string> { "a", "b" };
List<string> list2 = new List<string> { "c", "d" };
List<string> joinedList = list1.Concat(list2).ToList();
// joinedList now contains: "a", "b", "c", "d"
The Concat method takes two sequences (in this case, lists) and returns a new sequence that contains the elements from both lists.

Using the Union method:
List<string> list1 = new List<string> { "a", "b" };
List<string> list2 = new List<string> { "c", "d" };
List<string> joinedList = list1.Union(list2).ToList();
// joinedList now contains: "a", "b", "c", "d"
The Union method takes two sequences and returns a new sequence that contains all the unique elements from both lists.

Using LINQ:
List<string> list1 = new List<string> { "a", "b" };
List<string> list2 = new List<string> { "c", "d" };
var joinedList = (from e in list1 select e).Concat(from e in list2 select e).ToList();
// joinedList now contains: "a", "b", "c", "d"
This uses LINQ to concatenate the two lists.

Using a simple loop:
List<string> list1 = new List<string> { "a", "b" };
List<string> list2 = new List<string> { "c", "d" };
List<string> joinedList = new List<string>();
foreach (string s in list1)
{
    joinedList.Add(s);
}
foreach (string s in list2)
{
    joinedList.Add(s);
}
// joinedList now contains: "a", "b", "c", "d"
This method is straightforward, but less elegant than the other options.

Note that these examples assume you want to concatenate two lists of strings. If you have lists of different types or more complex structures, you may need to use a different approach.

The deepseek-r1 model produced the following.

To join two lists in C#, you can use the following methods depending on your needs:
 
1. **Concatenate Two Lists (Combine into One List)**
Use **`List<T>.AddRange()`** to merge two lists:

List<int> list1 = new List<int> { 1, 2 };
List<int> list2 = new List<int> { 3, 4 };
List<int> combined = new List<int>();
combined.AddRange(list1);   // Add elements from list1
combined.AddRange(list2);   // Add elements from list2
// Result: combined = [1, 2, 3, 4]

### 2. **Join Lists into a Single String (Element + Separator)**
Use **`string.Join()`** to concatenate elements with a separator:

List<int> list1 = new List<int> { 1, 2 };
List<int> list2 = new List<int> { 3, 4 };
List<int> combinedList = list1.Concat(list2).ToList(); // Combine into one list
string result = string.Join(", ", combinedList);
// Result: "1, 2, 3, 4"

3. **Combine Elements from Two Lists (Element-by-Element)**

Use **`Enumerable.Zip()`** to merge elements from two lists (requires same length):

List<int> listA = new List<int> { 1, 2 };
List<int> listB = new List<int> { 3, 4 };
var combined = listA.Zip(listB, (a, b) => $"{a}-{b}").ToList();
// Result: combined = ["1-3", "2-4"]

4. **Merge Lists with Distinct Elements**
Use **`Enumerable.Concat()`** + **`Distinct()`** to avoid duplicates:

List<int> list1 = new List<int> { 1, 2 };
List<int> list2 = new List<int> { 2, 3 };
var merged = list1.Concat(list2).Distinct().ToList();
// Result: merged = [1, 2, 3]

Key Notes:
- **`AddRange()`** is ideal for merging **`List<T>`** objects directly.
- **`string.Join()`** requires elements to be **`IEnumerable<T>`** (e.g., strings, numbers).
- **`Zip()`** works only if both lists have the same number of elements.
- **`Concat()`** creates an **`IEnumerable<T>`**; use `.ToList()` to convert to a `List<T>`.
 
Choose the method based on whether you need a combined list or a joined string.

The gpt-oss:20b model produced the following.

Joining two lists in C#
What you probably mean by “join” is concatenating (or appending) the elements of two lists into a single list.
(If you meant a relational join by key, that’s a different story – see the section at the bottom.) 

Below are the most common ways to do it, with code snippets and a few gotchas.

1. Using AddRange (in-place)

var list1 = new List<int> { 1, 2, 3 };
var list2 = new List<int> { 4, 5, 6 };
list1.AddRange(list2);   // list1 now contains 1,2,3,4,5,6
// list2 stays unchanged

Pros
Very fast – no allocations beyond the destination list.
Modifies the original list, so you don’t need a temporary variable.

Cons
list1 will grow; if you need a new list you must create one first.

2. Using LINQ Concat

var list1 = new List<int> { 1, 2, 3 };
var list2 = new List<int> { 4, 5, 6 };

var combined = list1.Concat(list2).ToList();   // new list

Pros
Concise and expressive.
Works with any IEnumerable<T> – not just List<T>.
You get a new list, leaving the originals untouched.

Cons
Allocates a new list ( ToList()) – may be overkill if you just want a quick append.
Slight overhead from LINQ plumbing (but negligible for typical list sizes).

3. Using List<T>.AddRange on a freshly-created list
If you want a new list and prefer the imperative style:

var combined = new List<int>(list1.Count + list2.Count);
combined.AddRange(list1);
combined.AddRange(list2);

This pre-allocates the correct capacity, so the list won’t have to grow as you add elements.

4. Using Enumerable.Append in a loop (rare)
If you only have a few items or need to do some processing per item:

var result = list1.Select(x => x);   // start with list1
foreach (var item in list2)
{
    result = result.Append(item);
}
var combined = result.ToList();

Generally not recommended unless you have a very specific reason.

5. Union vs. Concat
Concat: Keeps all items, including duplicates.
Union: Returns distinct items, preserving the order of the first list then the second.

var list1 = new List<int> { 1, 2, 3 };
var list2 = new List<int> { 3, 4, 5 };
var union = list1.Union(list2).ToList();   // 1,2,3,4,5

If you only need to avoid duplicates, Union is the way to go.

6. “Join” by key (relational join)
Sometimes people use the term join to mean combining two collections based on a common key, just like a SQL join. In C#, that’s usually done with LINQ’s Join, GroupJoin, or SelectMany. Here’s a quick example:

var customers = new[]
{
    new { Id = 1, Name = "Alice" },
    new { Id = 2, Name = "Bob" }
};

var orders = new[]
{
    new { Id = 101, CustomerId = 1, Amount = 50m },
    new { Id = 102, CustomerId = 1, Amount = 25m },
    new { Id = 103, CustomerId = 2, Amount = 75m }
};

var customerOrders = from c in customers
                     join o in orders on c.Id equals o.CustomerId
                     select new { c.Name, o.Id, o.Amount };

foreach (var co in customerOrders)
{
    Console.WriteLine($"{co.Name} ordered #{co.Id} for {co.Amount:C}");
}

Output:
Alice ordered #101 for $50.00
Alice ordered #102 for $25.00
Bob ordered #103 for $75.00
Inner join ( join … on … equals …) – only matching pairs.
Left outer join ( GroupJoin + SelectMany with DefaultIfEmpty) – keeps all left items.
Full outer join – requires a bit more plumbing (you typically need to union the two sides).
If you’re after a relational join, choose the appropriate LINQ method ( Join, GroupJoin, SelectMany, DefaultIfEmpty, etc.).

Quick Reference Table
Scenario                            Method                         Result
Append two lists into one new list  list1.Concat(list2).ToList()   New list, duplicates kept
Append in-place                     list1.AddRange(list2)          list1 grows, list2 unchanged
Append without duplicates           list1.Union(list2).ToList()    New list, duplicates removed
Relational join by key              `list1.Join(list2

The codellama model produced the following.

There are several ways to join two lists in C#, depending on the specific context and requirements. Here are a few common methods:

1. Using the `Concat` method:

List<int> list1 = new List<int>() { 1, 2, 3 };
List<int> list2 = new List<int>() { 4, 5, 6 };
var joinedList = list1.Concat(list2).ToList();

This will concatenate the two lists and create a new `List<int>` object that contains all the elements from both lists.

2. Using the `Join` method:

List<int> list1 = new List<int>() { 1, 2, 3 };
List<int> list2 = new List<int>() { 4, 5, 6 };
var joinedList = list1.Join(list2, i => i, j => j, (i, j) => new { First = i, Second = j });

This will join the two lists based on a common property and create a new `IEnumerable<T>` object that contains all the elements from both lists, with each element containing the properties of the two lists.

3. Using the `Union` method:

List<int> list1 = new List<int>() { 1, 2, 3 };
List<int> list2 = new List<int>() { 4, 5, 6 };
var joinedList = list1.Union(list2).ToList();

This will combine the two lists and create a new `List<int>` object that contains all the unique elements from both lists.

4. Using the `Intersect` method:

List<int> list1 = new List<int>() { 1, 2, 3 };
List<int> list2 = new List<int>() { 4, 5, 6 };
var joinedList = list1.Intersect(list2).ToList();

This will find the common elements in both lists and create a new `List<int>` object that contains only those elements.

These are just a few examples of how you can join two lists in C#. There are many other ways to do this, depending on your specific needs and requirements.

The deepseek-coder model produced the following.

In C#, the Join method can be used for combining two collections into one string or another collection type that implements IEnumerable<T> (like List). Here is an example of how it works on a list. Let's say we have these following code snippets in your project and you want to join them:

1) Using the Join method from LINQ(Language Integrated Query): 

List<string> names = new List<string> { "John", "Alice" };
var joinedNames = string.Join(" ", names); // Outputs: John Alice
Console.WriteLine(joinedNames);

2) Using the Join method manually (Without LINQ): 
If you want to join two collections without using any Linq, then it'll be a bit more manual operation and requires knowledge of looping mechanism in C# like for-each loops or foreach. Here is an example:  

List<string> names1 = new List<string> { "John", "Alice" };  // First list   
List<string> names2=new List<string>  {"Bob","Charlie"} ;// Second list to join the first one.    
names1.AddRange(names2);  
Console.WriteLine("Joined Result: {0}", string.Join (" ", names1)); // Outputs : John Alice Bob Charlie 
  
In both cases, this operation will generate a new List that contains all items from the two lists (one after another), and then it'll join these elements with spaces between them using `string.Join` method which returns final joined string/collection of strings in case if you want to print out combined list as output for instance console application or web API controller response header content values, etc..

The codestral model produced the following.

You can join two lists in C# by using the `AddRange()` method of the `List<T>` class. This method adds a collection to the end of the existing list.

Here is an example:

// Declare and initialize the first list
List<string> list1 = new List<string> { "one", "two", "three" };

// Declare and initialize the second list
List<string> list2 = new List<string> { "four", "five", "six" };

// Use AddRange to join the two lists
list1.AddRange(list2);

After executing this code, `list1` will contain all elements from both `list1` and `list2`.

There was a lot of similarity between the output from the models, but there were some differences. Specifically, the only model that suggested a solution using a Collection Expression was Claude Code, which was interesting.

Fedora 43 and Oracle

Fedora 43 was released a few days ago. Let’s start with the standard warning.

Here are the usual suspects.

As explained in the first link, I just do this for fun, and to see what is coming to RHEL/OL in the future.

Vagrant & Packer

I pushed Vagrant builds to my GitHub.

If you want to try these you will need to build a Fedora 43 box. You can do that using Packer. There is an example of that here.

What’s New?

What’s new with Fedora 43 you ask? You can read about it here.

There are newer versions of most of the packages, GNOME is now Wayland-only and they moved on to RPM 6.0. If you’re just a regular user like me, you won’t notice much of a difference. 🙂

Cheers

Tim…

Fedora 42 and Oracle

Fedora 42 was released a few days ago. Let’s start with the standard warning.

Here are the usual suspects.

As explained in the first link, I just do this for fun, and to see what is coming to RHEL/OL in the future.

Vagrant & Packer

I pushed Vagrant builds to my GitHub.

If you want to try these you will need to build a Fedora 42 box. You can do that using Packer. There is an example of that here.

What’s New?

What’s new with Fedora 42 you ask? You can read about it here.

Of course they are playing up references to “The Hitchhiker’s Guide to the Galaxy”.

There are newer versions of most of the building blocks and there is a new installer for the workstation version of Fedora 42. I use the server version, so even a manual installation looks similar to the previous version. Maybe the next version will use the new installer for all flavours of the software.

Cheers

Tim…

Hugepages in 19c : It’s the same but completely different!

A few days ago I got a DM from Patrick Jolliffe suggesting I update my Hugepages article as things had changed in 19c. He also pointed out this blog post.

I looked at the 19c docs for the USE_LARGE_PAGES parameter and I was a little confused. In addition to the original values of TRUE, FALSE and ONLY, we now have AUTO and AUTO_ONLY. When I read the text it wasn’t clear to me how these differed from TRUE and ONLY, but that is where the other post came in. It included a demonstration of using the AUTO_ONLY parameter value which made things much clearer.

To cut a long story short, if you use the AUTO or AUTO_ONLY settings, Oracle will configure hugepages on the OS for you and use them. If it can’t configure enough hugepages, using AUTO will start the database with a mix of hugepages and regular pages. If you use AUTO_ONLY the database won’t start unless it can configure enough hugepages on the OS.
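As a sketch of what that looks like in practice (assuming a SYSDBA connection, and that your instance uses an spfile), switching on the automatic behaviour might look like this. The parameter is static, so it needs a restart to take effect:

```shell
# Set USE_LARGE_PAGES to AUTO_ONLY and bounce the database.
# The parameter is static, hence scope=spfile and the restart.
sqlplus / as sysdba <<EOF
alter system set use_large_pages=AUTO_ONLY scope=spfile;
shutdown immediate
startup
EOF
```

With AUTO instead of AUTO_ONLY, the database would still start even if it couldn't configure enough hugepages.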

It certainly makes life much easier than having to manually configure hugepages on the OS. To quote Patrick, “you wonder why it wasn’t like that in the first place”. 🙂

I’m not sure if this was an original feature of 19c, or if it was added in a later release update. Either way I completely missed it.

I added a new section to the start of my hugepages article, as most of the rest of it is kind-of pointless now, except for historical purposes. 🙂

It seems I’m only 6 years late to the party. 🙂

Cheers

Tim…

Update: Stefan Koehler told me that AUTO was first introduced in 11.2.0.3, then went away, and came back in 19c. I’m even more late to the party. 🙂

Fedora 41 and Oracle

Fedora 41 was released a few days ago. Here comes the standard warning.

Here are the usual suspects.

This is just for fun, as explained in the first link.

I pushed Vagrant builds to my GitHub.

If you want to try these you will need to build a Fedora 41 box. You can do that using Packer. There is an example of that here.

What’s New?

So what’s new with Fedora 41? You can read about it here.

Probably the most noticeable change for me during this process was the move to DNF 5, and that was because when I tried to use “dnf groupinstall” it failed, and I had to switch to “dnf group install”. In addition some of the group names had changed. Not exactly drama, but… 🙂

Cheers

Tim…

When Podman refuses to stop containers

At work we’ve been using containers for a while. We started by running all our Oracle REST Data Services (ORDS) inside Docker containers running on Oracle Linux 7 (OL7), and over time we’ve introduced some more services based on Podman containers running on Oracle Linux 9 (OL9). This was all done with very little drama.

The majority of our containers were still using Docker on OL7, and OL7 is fast approaching end of life, so we’ve recently been through a process of migrating all of our Docker containers to Podman on OL9.

For the most part Docker and Podman are interchangeable, but the main thing I like about Podman is it runs in rootless mode. This gives me a warm and fuzzy feeling. 🙂

Having a larger number of Podman containers running we’ve noticed a number of odd occurrences, which all centre around Podman not being able to stop containers.

The problem

All of our deployments are automated. When we want to deploy a new version of an application, or upgrade the software stack running inside a container, it involves removing the container and redeploying it.

To remove a container it has to be stopped, directly or indirectly. The problem we’ve found is that occasionally Podman seems unable to stop a container. The container gets stuck with a status of “Stopping”, so it can’t be removed, which in turn breaks our automation.

Running subsequent stops, kills or removes makes no difference. These all fail to remedy the situation.

podman stop my_container
podman kill my_container
podman rm -vf my_container

This is an intermittent issue, but we’ve seen it happen with a number of different containers, not just our custom images, so I don’t think it is a problem with the underlying images.

What’s happening?

After much unhelpful Googling, which sent me down a bunch of pointless rabbit holes, I noticed the container processes weren’t stopping. Typically there is a process for the container, and separate processes for anything running inside it. For us there is usually a single Java process running inside the container. If I killed the processes related to the container, I could then stop it and things would go back to normal.

I did spot people saying this was caused by a problem when using the “--restart” flag in rootless mode. We used this flag with Docker, but even after removing it for Podman containers, the issue would still occur.

The (sticking plaster) solution

I ended up writing a script called “force_restart_container.sh” with the following contents.

#!/bin/bash
CONTAINER_NAME=$1
if [ -n "${CONTAINER_NAME}" ]; then
  echo "Container Name: ${CONTAINER_NAME}"
else
  echo "Container name must be specified."
  exit 1
fi

# Get the main process for the container.
# Remember to exclude the grep commands and this script's own process.
CONTAINER_PROCESS_ID=$(ps -ef | grep -v grep | grep -v force_restart_container | grep "${CONTAINER_NAME}" | awk '{print $2}')
echo "Container Process ID: ${CONTAINER_PROCESS_ID}"

# Kill any processes referencing the process ID.
# This kills the child processes too.
kill -9 $(ps -ef | grep -v grep | grep "${CONTAINER_PROCESS_ID}" | awk '{print $2}')

# Stop the container, as Podman doesn't notice the processes are dead
# until you interact with the container.
echo "Stop container. Ignore errors."
podman stop "${CONTAINER_NAME}"

echo "Start container."
podman start "${CONTAINER_NAME}"

echo "Check status of container."
podman ps -a | grep "${CONTAINER_NAME}" | grep -v grep

It’s a bit clunky, but it gets the job done when called with the container name as a command line argument.

sh force_restart_container.sh my_container

Conclusion

I don’t like including voodoo solutions, so I’ve not replaced all stops and restarts with this type of script. Most of the time Podman works fine, but if it does go nuts, we can easily fix the offending container.

Hopefully this irregularity will be fixed in a future version of Podman. If you’re interested, this is our current version.

$ podman --version
podman version 4.9.4-rhel
$

We are running on OL9, but I’ve seen people report similar issues on RHEL8/OL8, so I don’t think the version of the OS is an issue.

Cheers

Tim…

PS. If anyone knows of a better solution than my crappy script, I’m all ears. 🙂

Fedora 40 and Oracle

Fedora 40 was released over a month ago. Here comes the standard warning.

Here are the usual suspects.

I like messing about with this stuff, as explained in the first link.

I pushed Vagrant builds to my GitHub.

If you want to try these you will need to build a Fedora 40 box. You can do that using Packer. There is an example of that here.

What’s New?

So what’s new with Fedora 40? You can read about it here.

Cheers

Tim…

Fedora 39 and Oracle

Fedora 39 was released recently. Here comes the standard warning.

Here are the usual suspects.

I like messing about with this stuff, as explained in the first link.

I pushed Vagrant builds to my GitHub.

If you want to try these you will need to build a Fedora 39 box. You can do that using Packer. There is an example of that here.

What’s New?

So what’s new with Fedora 39? You can read about it here.

Cheers

Tim…

Using a scratchpad…

Followers of the blog know I’m a big advocate for writing things down. The main reason I do this is because I want a record of everything I do.

I rarely type a command directly into the command line. I nearly always type it in a scratchpad first. Currently I have 67,250 lines in my work scratchpad and 12,309 lines in my personal scratchpad.

When I say scratchpad, I just mean a text file, which I edit using a text editor. Nothing fancy.

Why do I do this?

Inspiration

Most of my articles and blog posts start life as notes in my personal scratchpad. At work some of my scratchpad notes become more formal documentation, like knowledge base notes and how-to files in Git etc.

I know if I don’t make the notes as I go along, I will forget what I did, and struggle to write the documentation later.

If something makes it as far as being written up, it gets removed from my scratchpads, so what’s in there at the moment are notes that have not made the cut, so to speak. 🙂

One of the reasons I’ve been able to produce content for so many years is there is a constant stream of stuff added to my scratchpads. Of course, some of it is junk, but some of it is not.

If you are struggling with documentation or inspiration, I think taking this approach will really help.

Reflection

One of the things I find really useful about taking notes is it allows me to look back and reflect on what I did to complete something. For example, I might search through my scratchpad to see what happened over the lifetime of a server. I can see all the tickets that were raised, and what firewall rules and configuration changes were required. When I get a similar request this allows me to estimate the amount of work that needs to be done, and I can see what teams will be involved in the process.

I could search through our ticketing system for much of this information, but I find it a lot easier to keep a record of my actions in a scratchpad, then drill into the tickets if I need more info, which I rarely do.

Rewrites

Much like my articles, if I read back through some notes and they aren’t 100% clear, I often rewrite them. Maybe adding some more text, or a clearer example. This process may result in something graduating into being a separate document, but sometimes it just stays in the scratchpad forever.

Give it a go

If you don’t already do this, give it a go and see how you feel about it. Especially you content creators.

Cheers

Tim…

When Overlapping CRON Jobs Attack…

We recently had an issue, which I suspect was caused by overlapping CRON jobs. By that I mean a CRON job had not completed its run by the time it was scheduled to run again.

CRON

If you’ve used UNIX/Linux you’ve probably scheduled a task using CRON. We’ve got loads of CRON jobs on some of our systems. The problem with CRON is it doesn’t care about overlapping jobs. If you schedule something to run every 10 minutes, but the task takes 30 minutes to complete, you will get overlapping runs. In some situations this can degrade performance to the point where each run gets progressively longer, meaning there are more and more overlaps. Eventually things can go bang!

Fortunately there is a really easy solution to this. Just use “flock”.

Let’s say we have a job that runs every 10 minutes.

*/10 * * * * /u01/scripts/my_job.sh > /dev/null 2>&1

We can use flock to protect it by providing a lock file. The job can only run if flock can acquire a lock on the file.

*/10 * * * * /usr/bin/flock -n /tmp/my_job.lockfile /u01/scripts/my_job.sh > /dev/null 2>&1

In one simple move we have prevented overlapping jobs.

Remember, each job will need a separate lock file. In the following example we have three separate scripts, so we need three separate lock files.

*/10 * * * * /usr/bin/flock -n /tmp/my_job1.lockfile /u01/scripts/my_job1.sh > /dev/null 2>&1
*/10 * * * * /usr/bin/flock -n /tmp/my_job2.lockfile /u01/scripts/my_job2.sh > /dev/null 2>&1
*/10 * * * * /usr/bin/flock -n /tmp/my_job3.lockfile /u01/scripts/my_job3.sh > /dev/null 2>&1
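You can see the non-blocking behaviour outside of CRON too. This is a minimal sketch, assuming flock is available (it ships with util-linux on most distributions): a background job holds the lock, and a second attempt with “-n” fails immediately rather than waiting.

```shell
#!/bin/bash
# Demonstrate flock -n: a second attempt fails while the lock is held.
LOCKFILE=/tmp/flock_demo.lockfile

# Hold the lock for a few seconds in the background.
# flock creates the lock file if it doesn't already exist.
flock "${LOCKFILE}" sleep 3 &

sleep 1  # Give the background job time to acquire the lock.

# This attempt returns immediately with a non-zero exit code, because -n
# means "fail rather than wait" when the lock is already held.
if flock -n "${LOCKFILE}" true; then
  RESULT="lock acquired"
else
  RESULT="lock busy"
fi
echo "${RESULT}"

wait
```

This is exactly what happens when a second CRON run fires while the first is still going: the new run exits straight away, and the original run is left alone.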

Oracle Scheduler (DBMS_SCHEDULER)

The Oracle Scheduler (DBMS_SCHEDULER) doesn’t suffer from overlapping jobs. The previous run must be complete before the next run can happen. If we have a really slow bit of code that takes 30 minutes to run, it is safe to schedule it to run every 10 minutes, even though it may seem a little stupid.

begin
  dbms_scheduler.create_job (
    job_name        => 'slow_job',
    job_type        => 'plsql_block',
    job_action      => 'begin my_30_min_procedure; end;',
    start_date      => systimestamp,
    repeat_interval => 'freq=minutely; interval=10; bysecond=0;',
    enabled         => true);
end;
/

The Oracle Scheduler also has a bunch of other features that CRON doesn’t have. See here.

Conclusion

I’m not a massive fan of CRON. For many database tasks I think the Oracle Scheduler is far superior. If you are going to use CRON, please use it safely. 🙂

Cheers

Tim…