Tuesday, June 24, 2025

QNAP and Rancher, A Match Made in Permissions Hell

 


I've been running some services that utilize AI Agents and wanted to keep request history for the agents to use. While there are several types of storage to choose from, PostgreSQL is one of the most recommended. Since I'm only using it to provide limited history, I didn't want to run a full VM to host the server, and hosting PostgreSQL storage over NFS is frowned upon in this establishment. We required non-network filesystem storage. (Yes, I get the irony here.)

So, I wanted to provide an ext4 / xfs persistent volume for my PostgreSQL server, but didn't want to just write through to the physical host, which I don't back up. I do, on the other hand, back up my NAS, so it seems like a good place to store my pgdata.

My primary stack (at the moment, anyhow) is Proxmox running Rocky Linux VMs, which run a SUSE Rancher Kubernetes cluster, with a QNAP NAS for storage.

To utilize the QNAP, there is a QNAP CSI Driver for Kubernetes that seemed to fit the bill. I went through the process of installing the plugin, but alas, it wouldn't start. It presented several errors, and I would work through them only to end up back at the first one, and no Kubernetes "Service" was ever created because the pods failed to start.

The issue ended up being Rancher permissions preventing the pods from starting.

time="2025-06-23T16:04:05Z" level=error msg="error syncing 'trident': error installing Trident using CR 'trident' in namespace 'trident'; err: reconcile failed; failed to patch Trident installation namespace trident; admission webhook \"rancher.cattle.io.namespaces\" denied the request: Unauthorized, requeuing"

Following that error, it then logged the following two messages.

level=info msg="No Trident deployments found by label." label="app=controller.csi.trident.qnap.io" namespace=trident

and...

 level=info msg="No Trident daemonsets found by label." label="app=node.csi.trident.qnap.io" namespace=trident

While I tried a few things to resolve this, I ended up going to the developers, and it seems someone just before me had hit the same issue and was able to work around it by running the following command:

kubectl label namespaces trident pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged

Labelling the namespace resolved the issue and allowed the plugin / service to be installed. From the first error, it appears the Trident operator was trying to patch these Pod Security Admission labels onto its own namespace and Rancher's rancher.cattle.io.namespaces admission webhook was denying the request, so applying the privileged labels by hand sidesteps the webhook and lets the privileged CSI pods start.

 

Tuesday, June 17, 2025

ChatGPT In Your Discord Server using n8n

 



So, everyone is going crazy over these AI Agents like ChatGPT, Google Gemini, and Claude. I figured, as a project and learning experience, I would write a Discord bot in Python and use webhooks to allow it to talk to my n8n deployment, which interacts with ChatGPT and responds to the Discord channel.

Well, I have successfully done it, but it's not perfect. The main issue is that, depending on what you ask it, ChatGPT will respond with a large swath of text, and Discord limits messages to 2,000 characters (4,000 if you have the Nitro upgrade).

When this happens, the Discord node that talks to the Discord webhook simply fails when it tries to send a response larger than the limit. I suspect I can just pass ChatGPT's response through some Python or JavaScript code that divides it up into multiple messages, though I haven't gone through that process yet; a rough sketch of the idea is below.
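As a minimal Python sketch of that splitting step (an illustration only; it isn't wired into my workflow yet, the 2,000-character limit is hard-coded, and the sample text is a stand-in):

# Hypothetical helper: split a long ChatGPT reply into Discord-sized chunks.
# 2,000 characters is the standard (non-Nitro) Discord message cap.
DISCORD_LIMIT = 2000

def split_message(text: str, limit: int = DISCORD_LIMIT) -> list[str]:
    """Split text into chunks no longer than limit, preferring newline or space boundaries."""
    chunks = []
    while len(text) > limit:
        # Cut at the last newline before the limit, then the last space, then mid-word as a last resort.
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = text.rfind(" ", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut].rstrip())
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks

if __name__ == "__main__":
    long_reply = "lorem ipsum " * 500  # stand-in for an oversized ChatGPT response
    for i, part in enumerate(split_message(long_reply), start=1):
        print(f"chunk {i}: {len(part)} characters")

Each chunk would then be sent as its own Discord message so that no single send exceeds the limit.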

The current process: I created a text channel in Discord called Oracle, and a Python Discord bot that joins the server and monitors that channel. It ignores any messages that aren't directed at the bot.

ie: "@OracleBot [Question for ChatGPT]"    (mine is not named OracleBot, but I digress)

The bot then takes that message, strips off its own name, trims the text, and sends the result to my n8n server, where I have a workflow that accepts messages from a webhook.
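For the curious, here is a minimal sketch of that listen-and-forward step. It is not my actual bot (that will come later); it assumes the discord.py and requests libraries, and the webhook URL, environment variable names, channel name, and payload fields are placeholders made up for the example.

# Minimal sketch: watch a channel, strip the bot mention, forward the question to n8n.
# Assumes discord.py 2.x and requests; URL, env vars, channel name, and payload fields are placeholders.
import os

import discord
import requests

N8N_WEBHOOK_URL = os.environ.get("N8N_WEBHOOK_URL", "https://n8n.example.com/webhook/oracle")
ORACLE_CHANNEL = "oracle"  # assumed channel name; mine is different

intents = discord.Intents.default()
intents.message_content = True  # also needs to be enabled for the bot in the Discord developer portal
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    # Ignore our own messages, other channels, and anything not directed at the bot.
    if message.author == client.user or getattr(message.channel, "name", None) != ORACLE_CHANNEL:
        return
    if not client.user.mentioned_in(message):
        return

    # Strip both mention forms (<@id> and <@!id>) and trim the remaining question text.
    question = (
        message.content
        .replace(f"<@!{client.user.id}>", "")
        .replace(f"<@{client.user.id}>", "")
        .strip()
    )
    if not question:
        return

    # Forward the question to the n8n webhook; a blocking requests call keeps the sketch simple.
    requests.post(
        N8N_WEBHOOK_URL,
        json={"question": question, "channel": message.channel.name, "author": str(message.author)},
        timeout=30,
    )

client.run(os.environ["DISCORD_BOT_TOKEN"])

On the n8n side, whatever JSON body the bot posts is what the Webhook trigger node exposes to the rest of the workflow, so the field names above are only an example.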

That webhook forwards the message to an AI Agent, which has ChatGPT (and some other tools that access data I have) connected to it to resolve questions.

Once the question has been resolved, the AI Agent sends the response to a Discord node, which then posts it directly to the channel where the question was asked.

Here is a flowchart of the happenings:



So, while the Discord bot sends the message to n8n, n8n does not actually respond back to the bot itself. It sends the reply to a Discord webhook that injects the message into the channel as if it were coming from the bot.

n8n flow for a Discord ChatGPT Bot

At some point, I will share my Python Discord bot, but it's a hack job right now, and I want to clean it up and possibly add some nice features to it. Once I do that, I can update this post with the code.




Upgrading Kubernetes via Rancher UI Completes Incompletely

 

At work and at home I primarily run on-prem Kubernetes with k3s and utilize the SUSE Rancher UI. These two tools make for a nice combination for running Kubernetes.

While Rancher is certainly nice for managing the cluster, I tend to do most of my deployments from the CLI with kubectl.

Anyhow, I was having issues with my clusters when upgrading Kubernetes. SUSE suggests performing cluster upgrades via the Rancher UI. This has always been problematic for me, as it would upgrade one node but none of the others.

i.e., after triggering Rancher to upgrade Kubernetes, I get...

 

NAME                   STATUS   ROLES                  AGE    VERSION
mynode1.mydomain.com   Ready    control-plane,master   290d   v1.32.5+k3s1
mynode2.mydomain.com   Ready    control-plane,master   289d   v1.31.9+k3s1
mynode3.mydomain.com   Ready    control-plane,master   289d   v1.31.9+k3s1

So today this blog post is about how to correct this half-hearted upgrade. The important thing is that you must remember the parameters you used to install your cluster in the first place. (Take note!)

While the exact command depends on whether you're installing the first node, a secondary master node, or a worker node, it will look something like this:

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable K3S_URL=[RANCHER_URL] K3S_TOKEN=[TOKEN] sh -  

Ensure you keep a copy of whatever your install configuration was so you can use it to upgrade your nodes at a later date. In my case, I just ran the install command again on each of the nodes, which rectified the issue.

NAME                   STATUS   ROLES                  AGE    VERSION
mynode1.mydomain.com   Ready    control-plane,master   290d   v1.32.5+k3s1
mynode2.mydomain.com   Ready    control-plane,master   289d   v1.32.5+k3s1
mynode3.mydomain.com   Ready    control-plane,master   289d   v1.32.5+k3s1

You can easily set up Ansible to perform these updates and make them simpler to run, especially if you have a large cluster or multiple clusters.

Hopefully that helps someone who landed in the same boat.

Saturday, May 24, 2025

The AI Search Conundrum


So, I've been thinking about the situation where AI search is replacing normal search. The interesting thing is that AI search and normal search work in similar ways; the main difference is that normal search just references data, while AI search uses that same data but has more predictive qualities.

The great thing about this is that rather than typing in searches to match references the search engine knows about, with AI, and specifically Large Language Models (LLMs), you can "ask" about a subject with specific details, and the AI can respond coherently and even predictively provide related information you may not have realized you wanted.

This is a huge step forward. I liken it to the jump from my childhood, learning by paging through our encyclopedias, to using the search engines of today.

One of the main issues AI brings is the way it changes our lives. For many, it's not for the better but, in their eyes, for the worse. Many people are being affected by AI where it can hurt them the most: by taking their income pipeline away from them.

Artists were among the first to be affected by this. Understandably, they try to stop the AI invasion into their livelihood. The problem is, that is never going to work, just as the Recording Industry Association of America (RIAA) tried and failed to stop the proliferation of MP3s. You can fight it all you want, but it's already here, and it's not going away.

While it can feel like doomsday for those already impacted by this technology, I will try to assure you that it's not very likely. Yes, this paradigm shift will change our lives, but we as humans will adapt, and I think it's a far better plan to shift our efforts away from fighting the inevitable and focus on what to do tomorrow to adapt.

These types of things have happened many times already, and people adapted to those changes. The invention of the steam engine kick-started what became the Industrial Revolution. It destroyed many jobs, but on the flip side, it created brand new ones.

The computer automated data processing, eliminating tons of jobs, but new jobs came from it. The Internet and e-commerce destroyed many retail jobs, but new jobs were created by them as well.

So, here it comes again. Many of the jobs and businesses created by the Internet are going to be destroyed by AI. Stack Exchange is all but dead, from what I hear. AI consumed all of Stack Exchange, and now AI provides that information to search users, who never even have to click the link to visit Stack Exchange.

Stack Exchange's revenue falls off a cliff, and then the inevitable happens: it, along with hundreds, thousands, or even millions of websites, will vanish. The wealth of the Internet will consolidate into a few companies that possess the AIs that have consumed the world's information.

The Conundrum.

The main reason these LLMs have the ability to do this is the sheer wealth of information on the Internet. Now, ask yourself: how did that information appear on the Internet? It appeared because millions upon millions of people were all working on different things, asking questions, and creating information and content to be shared.

If these AI searches choke off the income of those content creators, then they will stop creating content. If the content stops, then so does the LLMs' ability to keep learning at the same pace. All you have to do is look back ten years to see how vastly the world has changed. While LLMs can continue to learn, with companies like OpenAI, Google, Meta, Microsoft, and all the others feeding them information, that information will no longer be freely available.

We all hear how expensive these LLMs are to train: billions and billions of dollars are spent on AI chips and the energy to process data. Soon, it will be billions and billions of dollars just to obtain the data to train their AI on.

These AI search engines are cannibalising their future revenue stream: the people creating the data they use to answer your questions.

A lot of people who work on the Internet will be forced to find new jobs. What those jobs will be, I do not know yet. If history tells us anything, it's that history repeats itself. There will be a new avenue of employment; we just don't know what it will be yet.

I expect many websites will close their open doors and hide behind paywalls to prevent AI from ingesting their content and to create a subscription revenue stream. As that freely available data dries up, the large AI companies will find a way to pillage it anyhow, and I'm sure lawsuits will fly. They won't matter much, because "money talks" and the big AI companies will use that to get what they want. (Supreme Court justices taking money from billionaires, or billionaires trying to buy elections, anyone?)

I know this is a scary time for people, and the upheaval can be harsh. The only thing I can recommend is to keep a keen eye on the future and be one of the first to enter whatever new job / business market appears. I entered the computer revolution right at the beginning and made a great career of it. Even my job is now changing, and I must either out-pace AI or take my own advice and try to be first into the next great emerging job market.

It will be interesting to see how this blog post ages. We all knew AI was coming; we just didn't know exactly when it would arrive.