Recently, I set up a dedicated Ubuntu LTS server to experiment with self-hosting AI tools and strengthening my Linux skills.

I installed Ubuntu Server LTS, choosing the server edition because:

  • It’s lightweight (no GUI overhead)
  • It’s more secure by default
  • It’s ideal for long-running services
  • It’s better suited to infrastructure-style setups

Securing Remote Access with SSH

After installation, I configured secure SSH access.

SSH Key Authentication

Instead of relying on passwords, I:

  • Generated an SSH key pair
  • Added my public key to the server
  • Disabled password authentication

This means:

  • The server only accepts trusted keys
  • Brute-force password attacks won’t work
  • Much stronger security posture
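The steps above can be sketched as follows (the username and hostname are placeholders, not my actual setup):

```shell
# On the local machine: generate a modern Ed25519 key pair
ssh-keygen -t ed25519 -C "ai-server-access"

# Copy the public key into the server's ~/.ssh/authorized_keys
ssh-copy-id ai@server.local

# On the server: once key login is confirmed working, disable password
# auth in /etc/ssh/sshd_config and reload the daemon
sudo systemctl reload ssh
```

Always keep one working SSH session open while changing this, so a mistake doesn't lock you out.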

Hardened SSH Config

I reviewed and modified:

  • PasswordAuthentication
  • KbdInteractiveAuthentication
  • UsePAM
  • AuthenticationMethods

I also checked for conflicting config files inside:

/etc/ssh/sshd_config.d/

This taught me something important:
Linux configuration is layered. A drop-in file can silently override your main config.
That was a real troubleshooting moment.
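One way to see which value actually wins is to ask sshd for its effective, merged configuration rather than reading the files by hand:

```shell
# Print the effective configuration and check the key directives
sudo sshd -T | grep -Ei 'passwordauthentication|kbdinteractiveauthentication|usepam'

# List the drop-ins; on Ubuntu cloud images a file such as
# 50-cloud-init.conf can quietly re-enable password authentication
ls /etc/ssh/sshd_config.d/
```

Ubuntu's main sshd_config includes this directory near the top, and OpenSSH uses the first value it reads for each keyword, which is why a drop-in can beat the main file.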

User & Permission Management

I created a dedicated user (not root) for running services.

When checking:

uid=1002(ai) gid=1002(ai) groups=1002(ai),27(sudo),100(users)

I learned:

  • uid = user ID
  • gid = primary group
  • 27(sudo) = user can elevate privileges
  • 100(users) = standard user group

This helped me understand Linux permission structure.
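Creating such a user and inspecting its groups looks roughly like this (the username `ai` matches the output above):

```shell
# Create a dedicated, non-root user for running services
sudo adduser ai

# Grant sudo by adding it to the sudo group (GID 27 on Ubuntu)
sudo usermod -aG sudo ai

# Verify: uid, primary group, and supplementary groups
id ai
```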

Firewall & Network Awareness

I began thinking in terms of:

  • Localhost vs LAN exposure
  • Port binding (e.g. 127.0.0.1:11434)
  • Limiting services to internal-only access
  • Minimising attack surface
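With ufw (Ubuntu's default firewall frontend), that thinking translates into something like this sketch (the subnet is a placeholder for your LAN):

```shell
# Deny inbound by default, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH, restricted to the LAN subnet
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp

sudo ufw enable
sudo ufw status verbose
```

Services bound to 127.0.0.1 never reach the firewall at all, which is the first and cheapest layer of defence.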

This is where I shifted from “just installing things” to actually thinking like a sysadmin (sassy, I know).

Running OpenClaw & Ollama

I installed and experimented with:

  • OpenClaw (gateway service)
  • Ollama (local LLM runner)
  • Containerized services
  • Systemd service management

Example container command:

podman run -d \
  --name ollama \
  -p 127.0.0.1:11434:11434 \
  --memory=8g \
  --cpus=4 \
  --security-opt=no-new-privileges \
  docker.io/ollama/ollama

Key things I learned:

  • Binding to 127.0.0.1 prevents public exposure
  • Memory and CPU limits protect the host
  • no-new-privileges improves container security
  • Services can be controlled via systemctl
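For that last point, podman can generate a unit file so the container survives reboots. A sketch (newer podman versions prefer Quadlet files instead, so flags may vary):

```shell
# Generate a systemd user unit from the existing container definition
podman generate systemd --new --name ollama > ~/.config/systemd/user/ollama.service

# Manage it like any other service
systemctl --user daemon-reload
systemctl --user enable --now ollama
systemctl --user status ollama
```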

I also debugged:

  • Config validation errors
  • Provider configuration structure
  • Port conflicts
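The port-conflict part of that debugging usually comes down to finding which process owns the port. A couple of commands I leaned on (the port is Ollama's default from above):

```shell
# Which process is listening on 11434?
sudo ss -tlnp | grep 11434

# Follow a service's logs while reproducing the error
journalctl -u ollama -f

# Sanity check that the service answers on localhost
curl http://127.0.0.1:11434/api/version
```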

What This Actually Gave Me

This wasn’t just “installing tools”.

I learned:

  • How Linux services start and run
  • How SSH authentication really works
  • How config files override each other
  • How containers isolate applications
  • How to think about security first
  • How to debug using logs and CLI tools

Most importantly:
This taught me how to operate the system.


(And you need a lot more RAM than you think to run AI.)