Tuesday, June 24, 2025

QNAP and Rancher, A Match Made in Permissions Hell

 


I've been running some services that utilize AI agents and wanted to retain request history for the agents to use. While there are several types of storage to choose from, PostgreSQL is one of the most recommended. Since I'm only using it to provide limited history, I didn't want to run a full VM to host the server, and hosting PostgreSQL storage over NFS is frowned upon in this establishment. We required non-network filesystem storage. (Yes, I get the irony here.)

So, I wanted to provide an ext4 / xfs persistent volume for my PostgreSQL server, but I didn't want to just write through to the physical host, which I don't back up. I do, on the other hand, back up my NAS, which makes it a good location to store my pgdata.

My primary stack (at this moment anyhow) is Proxmox running Rocky Linux VMs running a SUSE Rancher Kubernetes cluster utilizing a QNAP NAS for storage.

To utilize the QNAP, there is a QNAP CSI driver for Kubernetes that seemed to fit the bill. I went through the process of installing the plugin, but alas, it wouldn't start. It presented several errors, and I would work through them only to end up back at the first one; no Kubernetes Service was ever created because the pods failed to start.

The issue turned out to be Rancher permissions preventing the pods from starting.

time="2025-06-23T16:04:05Z" level=error msg="error syncing 'trident': error installing Trident using CR 'trident' in namespace 'trident'; err: reconcile failed; failed to patch Trident installation namespace trident; admission webhook \"rancher.cattle.io.namespaces\" denied the request: Unauthorized, requeuing"

Following that error, it then logged the following two messages.

level=info msg="No Trident deployments found by label." label="app=controller.csi.trident.qnap.io" namespace=trident

and...

 level=info msg="No Trident daemonsets found by label." label="app=node.csi.trident.qnap.io" namespace=trident

While I tried a few things to resolve this myself, I ended up going to the developers, and it turns out someone just before me had hit the same issue and resolved it with a workaround: running the following command:

kubectl label namespaces trident pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged

Labelling the namespace this way relaxes Pod Security Admission for it, which resolved the issue and allowed the plugin / service to be installed.
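If you prefer to keep the fix declarative rather than relying on an imperative kubectl command, the same labels can live in a namespace manifest. This is just a sketch of that idea; the namespace name matches the Trident install above, and the rest is standard Pod Security Admission labelling:

```yaml
# Namespace manifest carrying the Pod Security Admission labels that
# allow the privileged Trident pods to start. Applying this with
# `kubectl apply -f` is equivalent to the label command above.
apiVersion: v1
kind: Namespace
metadata:
  name: trident
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
```

Keeping the labels in the manifest means a future re-create of the namespace won't silently lose the workaround.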

 

Tuesday, June 17, 2025

ChatGPT In Your Discord Server using n8n

 



So, everyone is going stir-crazy over these AI agents like ChatGPT, Google Gemini, and Claude. I figured that, as a project and learning experience, I would write a Discord bot in Python and use webhooks to let it talk to my n8n deployment, interact with ChatGPT, and respond in the Discord channel.

Well, I have successfully done it, but it's not perfect. The main issue is that, depending on what you ask it, ChatGPT will respond with a large swath of text, and Discord limits you to 2,000 characters per message (4,000 if you have the Nitro upgrade).

When this happens, the node that talks to the Discord webhook simply fails when it tries to send a response larger than the limit. I suspect I can add a Python or JavaScript code step that divides the response into multiple messages, though I haven't gone through that process yet.
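The splitting step itself is simple string handling. Here's a rough sketch of the kind of code step I have in mind; the function name and the preference for breaking on newlines are my own choices, not anything n8n or Discord requires:

```python
def chunk_message(text: str, limit: int = 2000) -> list[str]:
    """Split text into pieces that fit Discord's message limit,
    preferring to break on newlines, then spaces, before hard-splitting."""
    chunks = []
    while len(text) > limit:
        # Look for a natural break point inside the allowed window.
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = text.rfind(" ", 0, limit)
        if cut <= 0:
            cut = limit  # no break point found; hard split
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks
```

Each chunk would then be posted as its own message, so the Discord node never sees anything over the limit.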

The current process: I created a text chatroom in Discord called Oracle, then created a Python Discord bot that joins the server and monitors that channel. It ignores any messages that aren't directed at the bot.

ie: "@OracleBot [Question for ChatGPT]"    (mine is not named OracleBot, but I digress)

The bot then takes that message, strips off its own name, and trims the text, then sends the message to my n8n server, where I have a workflow that accepts messages from a webhook.

That webhook forwards the message to an AI Agent which has ChatGPT (and some other tools that access data I have) connected to it to resolve questions.

Once the question has been resolved, the AI Agent sends the response to a Discord node that posts it directly to the channel where the question was asked.

Here is a flowchart of the happenings:



So, while the Discord bot sends the message to n8n, n8n does not actually respond through the bot itself. It sends the response to a Discord webhook that injects the message into the chatroom as if it were coming from the bot.

n8n flow for a Discord ChatGPT Bot

At some point, I will publish my Python Discord bot, but it's a hack job right now and I want to clean it up and possibly add some nice features to it. Once I do that, I will update this post with the code.




Upgrading Kubernetes via Rancher UI Completes Incomplete

 

At work and at home I primarily run on-prem Kubernetes with k3s and the SUSE Rancher UI. These two tools make a nice combination for running Kubernetes.

While Rancher is certainly nice for managing the cluster, I tend to do most of my deployments from the CLI with kubectl.

Anyhow, I was having issues with my clusters whenever I upgraded Kubernetes. SUSE suggests upgrading your cluster via the Rancher UI. This has always been problematic for me: it would upgrade one node, but none of the others.

i.e., after triggering Rancher to upgrade Kubernetes, I get...

 

NAME                     STATUS   ROLES                  AGE    VERSION
mynode1.mydomain.com     Ready    control-plane,master   290d   v1.32.5+k3s1
mynode2.mydomain.com     Ready    control-plane,master   289d   v1.31.9+k3s1
mynode3.mydomain.com     Ready    control-plane,master   289d   v1.31.9+k3s1

So today this blog post is about how to correct this half-hearted upgrade. The important thing is that you remember the parameters you used to install your cluster in the first place. (Take note!)

While the exact command depends on whether you're installing the first node, a secondary master node, or a worker node, it will look something like this:

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable K3S_URL=[RANCHER_URL] K3S_TOKEN=[TOKEN] sh -  

Keep a copy of whatever your install configuration was, and you can use it to upgrade your nodes at a later date. In my case, I just ran the install command again on each of the nodes, which rectified the issue.

NAME                     STATUS   ROLES                  AGE    VERSION
mynode1.mydomain.com     Ready    control-plane,master   290d   v1.32.5+k3s1
mynode2.mydomain.com     Ready    control-plane,master   289d   v1.32.5+k3s1
mynode3.mydomain.com     Ready    control-plane,master   289d   v1.32.5+k3s1

You can easily set up Ansible to perform these updates to make them simpler, especially if you have a large cluster or several clusters.
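Short of full Ansible, even a throwaway script covers the "rerun the installer on each node" chore. A sketch under my own assumptions: passwordless SSH to the nodes, and the install command saved from the original setup (the bracketed placeholders are yours to fill in, as above):

```python
import subprocess

# The exact install command you originally used -- keep a copy of it!
INSTALL_CMD = (
    "curl -sfL https://get.k3s.io | "
    "INSTALL_K3S_CHANNEL=stable K3S_URL=[RANCHER_URL] K3S_TOKEN=[TOKEN] sh -"
)

def upgrade_commands(nodes, install_cmd=INSTALL_CMD):
    """Build the ssh invocation for each node. Run them one at a time
    so the control plane stays available during the rolling upgrade."""
    return [["ssh", node, install_cmd] for node in nodes]

def upgrade(nodes):
    for cmd in upgrade_commands(nodes):
        subprocess.run(cmd, check=True)  # stop immediately if a node fails
```

Calling `upgrade(["mynode2.mydomain.com", "mynode3.mydomain.com"])` would rerun the installer on the lagging nodes; Ansible does the same job with better reporting, but this gets you unstuck.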

 Hopefully that helps someone who landed in my boat.

Saturday, May 24, 2025

The AI Search Conundrum


So, I've been thinking about the situation where AI search is replacing normal search. The interesting thing is that AI search and normal search work in similar ways; the main difference is that normal search just references data, while AI search uses that same data but has more predictive qualities about it.

The great thing about this is that rather than typing in searches to match references the search engine knows about, with AI, and specifically Large Language Models (LLMs), you can "ask" about a subject with specific details, and the AI can respond coherently and even predictively provide related information you may not have realized you wanted.

This is a huge step forward. I liken it to the jump from my childhood of learning by paging through our encyclopedias to using the search engines of today.

One of the main issues AI brings is the way it changes our lives. For many, it's not just for the better but, in their eyes, for the worse. AI is affecting many people where it can hurt them the most: by taking their income pipeline away from them.

Artists were among the first to be affected by this. Understandably, they are trying to stop the AI invasion into their livelihood. The problem is, that is never going to work, just as the Recording Industry Association of America (RIAA) tried and failed to stop the proliferation of MP3s. You can fight it all you want, but it's already here and it's not going away.

While it can feel like doomsday for those already impacted by this technology, I will try to assure you that it's not very likely. Yes, this paradigm shift will change our lives, but we as humans will adapt, and I think it's a far better plan to shift our efforts away from fighting the inevitable and toward figuring out what to do tomorrow to adapt.

These types of things have happened many times already, and people adapted to those changes. The invention of the steam engine kick-started what became the industrial revolution. It destroyed many jobs, but on the flip side, it created brand new ones.

The computer automated data processing, eliminating tons of jobs, but new jobs came from it; the Internet and e-commerce destroyed many retail jobs, but new jobs were created by them as well.

So, here it comes again. Many of the jobs and businesses created by the Internet are going to be destroyed by AI. Stack Exchange is all but dead, from what I hear: AI consumed all of Stack Exchange, and now AI serves that information to search users who never even have to click the link to visit Stack Exchange.

Stack Exchange's revenue falls off a cliff, and then the inevitable happens: it, along with hundreds, thousands, or even millions of websites, will vanish. The wealth of the Internet will consolidate into the few companies whose AIs have consumed the world's information.

The Conundrum.

The main reason these LLMs have the ability to do this is the sheer wealth of information on the Internet. Now, ask yourself: how did that information appear on the Internet? It appeared because millions upon millions of people were all working on different things, asking questions, and creating information and content to be shared.

If these AI searches choke off the income of these content creators, then they will stop creating content. If the content stops, then so does the LLMs' ability to continue learning at the same pace. All you have to do is look back ten years to see how vastly the world has changed. While LLMs can continue to learn with companies like OpenAI, Google, Meta, Microsoft, and all the others feeding them information, that information will no longer be freely available.

We all hear how expensive these LLMs are to train. Billions and billions of dollars are spent on AI chips and energy for processing data. Soon, it will be billions and billions of dollars just to obtain the data to train their AI on.

These AI search engines are cannibalising their future revenue stream: the people creating the data they use to answer your questions.

A lot of people who work on the Internet will be forced to find new jobs. What those jobs will be, I do not know yet. If history tells us anything, it's that it repeats itself: there will be a new avenue of employment, we just don't know what it will be yet.

I expect many websites will close their open doors and hide behind paywalls, both to prevent AI from ingesting their content and to create a subscription revenue stream as that freely available data dries up. The large AI companies will find a way to pillage that data anyhow, and I'm sure lawsuits will fly. They won't matter much, because "money talks" and the big AI companies will use that to get what they want. (Supreme Court Justices taking money from billionaires, or billionaires trying to buy elections, anyone?)

I know this is a scary time for people, and the upheaval can be harsh. The only thing I can recommend is to keep a keen eye on the future and be one of the first to enter whatever new job or business market appears. I entered the computer revolution right at the beginning and made a great career of it. Even my job is now changing, and I must either out-pace AI or take my own advice and try to be first into the next great emerging job market.

It will be interesting to see how this blog post ages.  We all knew AI was coming, we just didn't know when exactly it would arrive.

Monday, August 13, 2012

Fixing the charging issue for the Nexus S

I don't have a Nexus S phone, as I have a Galaxy Nexus. Someone I know came to me with their Nexus S, telling me that when they plug it in, it no longer charges. I hadn't seen this issue before, so I took to Google to find the answer.

Anyway, I came upon a post by MrAwesomeNL that solved the issue and figured I would share it here, as quite a few people seem to have had this problem. It also appears that it isn't just the Nexus S having the issue.

The Fix

  1. Unplug the phone.
  2. Remove the battery and SIM card.
  3. Press the power button for 10 seconds.
  4. Replace the SIM and battery.
  5. Plug the phone in, but don't turn it on.  Within a few seconds, you should see the charging symbol appear on the screen.
  6. Once the symbol appears, turn the phone on while still plugged in.  Once booted up, you should see the charging icon on the battery.
  7. Let the phone charge!

I hope this helps, and special thanks to MrAwesomeNL for sharing this fix.

Saturday, August 11, 2012

Linux on the Samsung Series 9 NP900X4C-A03US

I just purchased a Samsung Series 9 laptop, model NP900X4C-A03US, with the intention of installing Linux on it. Since I couldn't find anything on Google about someone installing Linux on this exact laptop, I figured I would share my experience along with a review of the overall laptop.

Briefly, the Samsung Series 9 model NP900X4C-A03US is a 15" Ultrabook. It is extremely thin at about half an inch (1.3cm) thick when closed and weighs only about 3.6lbs (1.63kg). It comes with a 1.9GHz quad-core Core i7 processor, 8GB of memory, a 256GB SSD, and Windows 7 Professional 64-bit.

The laptop comes with three USB ports: two on the right side that are USB 3.0 and one on the left side that is USB 2.0. This is a big deal for me, as I like to carry an optical mouse with me, and I also have an external DVD burner (LITEON model eNAU708) that requires two USB ports to power it. The laptop does not have an Ethernet port, but a USB-to-Ethernet adapter is included in the box. It also supports not just B/G/N Wifi, but N speeds at both 2.4GHz and 5GHz. If you want to see all the laptop specs, you can check them out here.

With the SSD, it took about 8 seconds to boot Windows 7 Professional and about 6 seconds to boot Ubuntu 12.04. So far, I'm very happy with the laptop, and I hope that with a few fixes it will be even better!

On to the Linux stuff.

Linux

As a Linux admin, my preferred distributions on the server are Red Hat or Red Hat clones. I first booted a Fedora 17 live CD. Everything seemed to work except the Wifi. Rather than going ahead with the install and trying to get the Wifi working, I booted an Ubuntu 12.04 live CD, and everything worked immediately (mostly; I will get to that). Since I'm also an Ubuntu fan for desktops, I was fine with installing Ubuntu, so that is what I installed.

Installing Ubuntu 12.04

Since it comes with Windows 7 Professional and recovery partitions, I didn't want to lose them, especially if I couldn't get a distribution properly functioning on it. First I re-sized the Windows partition and tried to install Ubuntu, but due to how the partitions were set up, Ubuntu wouldn't allow the install on the free space. I had no choice but to re-partition the disk, which would have blown away the Samsung / Windows partitions. So, I rebooted the live CD, connected a USB hard drive, and used "dd" to clone the SSD to a 1TB drive I keep for backups.

After cloning the drive, I wiped the partitions out and created new ones, though leaving 20% of the disk space free for the SSD to use, allowing faster operation.

The install went flawlessly.

Now, there are a few things that don't function correctly, but so far they haven't been that big of a deal. I also haven't yet tried to get them working; I will update this with the outcome once I do.

What doesn't work

Touchpad

The touchpad itself works fine; what doesn't work is right-clicking with it. So far, I've only come across this as an issue once. As I noted earlier, I carry an optical mouse with me and use it most of the time.

*Edit* I have located a fix for right-clicking with the touchpad. Special thanks to b16a2smith over at ubuntuforums.org.

 sudo su
 echo "options psmouse proto=exps" > /etc/modprobe.d/psmouse.modprobe
 reboot


Backlit Keyboard

So far, I haven't gotten the backlit keyboard lights to light up using the function key (Fn) and F9 / F10.

Screen Brightness Control

Again, using the Fn key with F2 / F3 to control screen brightness doesn't really work. It can cause the screen to flicker, so don't keep tapping them or it will; after it finishes, it returns to normal.

Fan Control

Using the Fn key and F11 doesn't seem to do anything in controlling the fan.

What Does Work

Sound

Sound works perfectly. The Fn key with F6 (mute) / F7 (volume down) / F8 (volume up) works perfectly as well.

WebCam

While I haven't used it extensively, I was able to open up a Google+ Hangout and video chat with a buddy of mine.

Just About Everything Else

Just about everything else that I have tested works fine.

What I haven't yet tested

  • The HDMI port
  • The microUSB Network adapter

Possible Fixes

There may be a fix for the backlit keyboard issue, but I'm not sure if it will work with this model yet, as I haven't tried it. Special thanks to John Slade and his post about his Series 9 laptop for the link to samsung-laptop-dkms.

Wednesday, February 9, 2011

Hanging at Verifying DMI Pool Data...

I'm checking out NexentaStor for possible use in our operations. Currently I'm setting up a trial of the Enterprise version with HA fail-over and replication. I've got two whitebox servers to test on.

The setup is two whitebox servers made of the following:
  • Gigabyte GA-MA78LMT-S2 (bios vs 3.2)
  • AMD Athlon II X4 645 Propus @ 3.1Ghz
  • 4GB (2x2) G.Skill memory
  • LSI SAS3442E-R Raid card (don't like, but does work)
  • 4x Seagate Constellation ES 2TB drives. (ST32000644NS)
  • Intel EXPI930CTBLK GB nic. (as the primary and the onboard nic for replication)

I set up both, but had a UPS fall over, and one of the servers wouldn't boot back up afterwards. It was getting stuck at "Verifying DMI Pool Data...". Generally, resetting the CMOS fixes this, but it didn't. (This is a good page for those running into this error.) Usually it's the CMOS being corrupted, or something in the way of the boot process. After a while I figured the master boot record was messed up, so I reinstalled NexentaStor. Well, that didn't work either, and after a few hours of messing around with it, I finally fixed it: I force-wiped the drives entirely. It seems the boot sector was messed up, and even reinstalling NexentaStor wasn't fixing it. While I have a raid card, I have it set up in JBOD and am going to use ZFS to manage the raid.

To clear off the entire drive, I just went into the LSI bios and told it to create and then destroy the raid. After doing that, I reinstalled and the system booted again.

Update:
It appears that there is an issue with the LSI card and NexentaStor. After reinstalling, I rebooted and got the same issue. So sometimes it will boot, but once it doesn't, it's over. I think I have a choice: install a couple of drives on the motherboard's SATA controller for the OS, or just not use NexentaStor.

I'm going to reinstall the HA setup with Openfiler for now. I've set one of these up before. While it does well (though ZFS or BtrFS would make it 100x better), it's not polished; actually, the UI is quite buggy. I just need something that will boot for this location. I will get supported hardware for the co-locations if I go with NexentaStor.