
Deploying Known on a Docker stack

To migrate my PHP applications that did not come with a handy docker-compose file, I needed a vanilla setup for my stack-based Docker environment. Known is one of those apps, so the first step was to build that PHP environment.

Setting up a Docker Hub repository and building a custom PHP image

[If you decide to use my image, you will not have to build this yourself. Skip ahead to the 'Setting up and deploying the Known install' section.]

Most PHP-based apps also need specific extensions built in. While I am sure they're out there on Docker Hub, I was not able to easily find any images that met my specific needs, so I went ahead and set up my own repository on Docker Hub.

Something I learnt today was that a Docker stack only accepts pre-built images, which essentially means you cannot use the regular build directive on the fly in your docker-compose file. A quick workaround is to build the image locally and then publish it to a public repository from where the stack can easily pull it.

  • Create a Dockerfile: sudo nano Dockerfile
  • Copy over the contents from my Dockerfile gist and edit or update it if you need any additional extensions (a rough sketch follows after the build step below). Do note that I am pulling a Debian Buster image here. An Alpine image would be leaner, but let's run with this for now.
  • Build the image locally:
    docker build -t sriperinkulam/php-7.4.3-apache-buster-plus:latest .

Edit the image and tag name as needed. If your Dockerfile is defined correctly, the image should be ready in a few minutes. It took me a few iterations to get this working with all the extensions I needed, so it shouldn't give you any errors during the build process.
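For reference, here's a stripped-down sketch of what such a Dockerfile can look like. This is not my full gist, and the exact extension list is an assumption based on what Known and similar PHP apps typically need, so defer to the gist for the real thing:

FROM php:7.4-apache-buster

# System libraries required by the PHP extensions below
RUN apt-get update && apt-get install -y \
        libpng-dev libjpeg-dev libfreetype6-dev libzip-dev libicu-dev unzip \
    && rm -rf /var/lib/apt/lists/*

# Common extensions: GD for image handling, MySQL drivers, intl, zip, exif
RUN docker-php-ext-configure gd --with-freetype --with-jpeg \
    && docker-php-ext-install gd mysqli pdo_mysql intl zip exif

# Clean URLs need Apache's rewrite module
RUN a2enmod rewrite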

  • Log in to Docker Hub: docker login
  • Push your image over to your repository:
    docker push sriperinkulam/php-7.4.3-apache-buster-plus:latest
  • Within a few minutes, your image should be available to pull whenever needed!

 

Setting up and deploying the Known install

  • Create a folder for the known installation:

sudo mkdir known && cd known

  • Copy over my ‘known.yml’ gist from here and paste it into a known.yml file.

sudo nano known.yml

This is where we use the image we built earlier! Make sure you update the passwords therein.
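If you just want a sense of its shape before grabbing the gist, it's roughly a stack file with our custom PHP image, a MariaDB service and Traefik labels, along the lines of the sketch below. The service names, label syntax and mount paths here are assumptions for illustration; use the gist as the source of truth:

version: "3.7"

services:
  known:
    image: sriperinkulam/php-7.4.3-apache-buster-plus:latest
    volumes:
      - ./app:/var/www/html        # the Known package we unpack below
    networks:
      - traefik-net
      - internal
    deploy:
      labels:
        - traefik.enable=true
        - traefik.port=80
        - traefik.docker.network=traefik-net
        - traefik.frontend.rule=Host:${DOMAIN}

  db:
    image: mariadb:10.4
    environment:
      MYSQL_ROOT_PASSWORD: changeme    # update these passwords!
      MYSQL_DATABASE: known
      MYSQL_USER: known
      MYSQL_PASSWORD: changeme
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - internal

networks:
  traefik-net:
    external: true
  internal:

volumes:
  db-data: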

  • Create an ‘app’ folder. This is where we will copy in the core Known package.

sudo mkdir app && cd app

Marcus Povey does the heavy lifting of packaging the Known install ‘unofficially’. Download his latest pre-packaged release from his portal.

wget https://withknown.marcus-povey.co.uk/<package>

Finally, unzip/untar the package inside the app folder.
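The exact filename depends on the release you grabbed, so something along these lines:

tar -xzf <package>

If the archive unpacks into its own sub-folder, move its contents up a level so that the Known code sits directly inside the app directory.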

  • Step back into the main ‘known’ directory and deploy the stack that we defined in the known.yml file.

cd ..

DOMAIN=known.domain.org SCHEME=https docker stack deploy -c known.yml known

  • Assuming you've already set up the Traefik container, as mentioned in my Jitsi post, you should now have Known up and running. Navigate to the domain and you should see the warmup page.

  • Key in the DB credentials that you set up in the known.yml file. The database needs to be set as MariaDB, since that's how we've defined it there. Also make sure the Uploads folder is writable by www-data:

sudo chown -R www-data /var/www/html/Uploads/
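That path is the one inside the container; if the app folder is bind-mounted from the host (as in the sketch above), running the chown against the mounted folder on the host works just as well. Alternatively, run it inside the running container — the service name below is an assumption based on how the stack was named:

docker exec $(docker ps -q -f name=known_known) chown -R www-data /var/www/html/Uploads/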

Set up your user account and proceed with the regular administration controls!

Weeknote 17 – Covid19, Webmentions and Migrations

A glimpse of the past week and the few articles, podcasts, tools, videos and music that captivated my attention:

This week I got a better handle on deploying and maintaining Docker images. With that squared away, I decided to decommission the DigitalOcean servers that I had set up back in 2017 and migrate the applications over to my Hetzner server. I no longer have to rely on ServerPilot. All my applications are now deployed as Docker containers fronted by #Traefik as a reverse proxy.

The Jitsi server instance I deployed last week has been serving pretty well so far! I tweaked it further to enable local recording. Meena has been using it to run her online Yoga sessions, and she can now also use it to record the podcast that she soon plans to start! Session recording and live-streaming are something I still need to configure in my current setup. The logs indicate it's failing on the Jibri handshake, and that's something I'll need to figure out once I have a bit more time in hand.

Other actionable tasks on the tech front include figuring out a swift way to deploy PHP-based applications on Docker. I still need to port over my Known and PixelFed applications.

Last week, Philip nudged me to look into my webmentions. The theme I previously had wasn't displaying them as comments. I needed a bit of a 'creativity boost' and decided to use a new theme. I really liked the McLuhan theme by Anders Noren and decided to give it a spin. I noticed a minor issue with the comments section and was easily able to fix it. I do however want to make a few tweaks to it, which I'll hopefully get around to over the next few weeks. I also have my eye on Prateek's Zuari theme, which is deeply IndieWeb-compliant.

Last evening, Atchuth mentioned he's now using his RPi4 as a full-fledged desktop machine. He's moved over from DietPi to Raspbian OS. Makes sense given his current use-case. Sharath, on the other hand, is still using his RPi4 predominantly as a Nextcloud server.

This week has been great in terms of re-connecting with some good old friends! We're planning a reunion sometime next year. 20 years since we finished school! Phew!

The Khat-pat makers group that I am part of is now getting more traction and it's awesome just hearing about the cool things that other folks are doing. This week's session was on building drones from scratch!

I also got around to connecting with the kids I taught 10 years ago! They've all moved on to college and are into various things now. One kid wanted to become a stellar teacher, and this is one of the best messages I've received so far!

Last night we played a few good rounds of Pictionary/Dumb-charades with the kids. Shasta had a terrific time just watching the 'adults' in their element. You never get too old for this game!

Interesting podcasts this week

Picturing Data: Monocle

 

Interesting tools this week

The Termux app has been an amazing value-add to my phone for those quick server check-ins!

Interesting reads this week

‘We can’t go back to normal’: How will coronavirus change the world?: 2020 will be etched in the memories of most people living today as the year that drastically changed how we function as humans. As Covid19 spreads across various countries, it's influencing and compelling dire actions. What was considered impossible at various levels is now just a quick decision point: financial stimulus, vaccine production, remote work and schooling, to name a prominent few. As we build resistance to the virus as a society, and consequently as the adrenaline that's currently rampant drains out, we're going to see the fallout of our actions, and of several inactions thereof. Assuming another event of this magnitude does not occur over the next decade, the world is going to need a major overhaul to recover from the pandemic.

Beginner’s guide to PGP: A simple and easy read on PGP with actionable guidance for non-tech folks out there. It's a given that over the next few months most countries are going to erode the privacy of their citizens. Policies that are brought into effect now for surveillance and tracing will overarchingly become the norm and will be almost impossible to retract. It's critical that we act now to safeguard what little privacy we can. It has never been about 'There's nothing that I have to hide'; ignorance can and will be heavily misappropriated for someone else's benefit.

Why one Neuroscientist started blasting his core: Neural pathways, stress control systems and of course the adrenal medulla. Yet another fascinating read on how amazing the human body is!

How Saudi Arabia’s religious project transformed Indonesia: Religion, politics and power are so damned intertwined. If anything, it's one of those defining necessities and vices of humanity.

It’s time to build: So beautifully phrased! It's high time we dusted off those tarp sheets and got building from the ground up. Over time we've become complacent as a society and we need one strong reboot!

 

Self-hosting Jitsi video conferencing

Jit.si is a terrific, secure video-conferencing #alternative to #Zoom and obviously comes with all the open-source awesomeness. Call clarity is amazing and, with a room capacity of 100+ (and potentially much more, driven by network and server capabilities), it's an absolute no-brainer to switch over. Use Jitsi Meet on the desktop, or use one of their slick Android, F-Droid or iPhone apps on hand-held devices to organise your video conferences.

With the recent push to video-conferencing most meetings, I decided to set up my own instance. This was way more straightforward than I thought! I am currently running it as a Docker container, fronted with #Traefik for encryption. The configuration and installation steps mentioned below assume you have access to a domain name and a server with Docker already installed.

Setting up the server:

A records:

Create a new A record with your hosting provider to point your subdomain to the IP address of your server.

SSH into server and install:

A decent server configuration should cost you only about $5 per month. I already had an account with Hetzner, so I deployed my instance there.

SSH into your server: sudo ssh peri@xx.xx.xx.xx

Initialise a swarm: sudo docker swarm init

Create the traefik network: sudo docker network create --driver=overlay traefik-net

Create two new yml files – one for traefik-ssl and another for jitsi, copy over the contents from the respective files available here from ethibox and make edits as needed:

sudo nano traefik-ssl.yml

sudo nano jitsi.yml

Deploy the Traefik stack: sudo docker stack deploy -c traefik-ssl.yml traefik

Deploy the Jitsi stack: DOMAIN=meet.srkn.org SCHEME=https docker stack deploy -c jitsi.yml jitsi

If all goes well, you should now have a Jitsi instance running on your server with routing and ssl taken care of by Traefik.
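A quick way to confirm everything came up is to list the stacks and check that each service reports its replicas. The exact service names depend on the jitsi.yml you used; docker service ls will show the real ones:

sudo docker stack ls
sudo docker service ls
sudo docker service logs jitsi_prosody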

Configuring Jitsi:

You now need to set up the admin account. Log in to the Jitsi prosody container (update the command to reflect your container name):

docker exec -it jitsi_prosody_1.xxxxxx bash

Set as many host credentials as you need:

prosodyctl --config /config/prosody.cfg.lua register host meet.jitsi usejitsi
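For what it's worth, prosodyctl's register sub-command takes its arguments in the order username, XMPP domain, password. So adding another account would look something like this, with the name and password below being made-up placeholders:

prosodyctl --config /config/prosody.cfg.lua register anotheruser meet.jitsi a-strong-password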

Exit out of the console! You now have a fully functional self-hosted video-conferencing setup running! Do note that the recording feature currently does not work outright; I will have to sort that out over the next few days.

Desktop and Mobile clients:

I decided to go the Electron route for the desktop client. Jitsi has apps for both Android and iOS that work out of the box. Just make sure you point them at your domain.

 

Moving files from all subdirectories without the tree structure

Appa, who's very active on Facebook, recently wanted an easy way to sift through his old videos and photos that were up on his account. I decided to take a dump of his data from the network using the 'Download your information' section. I initiated the process with just a few clicks, and after a few days I was able to download the zipped files of all the content he had posted on FB, including the likes, messages, shares etc. If you haven't already, I would strongly recommend you get a copy of your data. I rarely interact on Facebook these days, so it was even more interesting to see all the information they collect on you.

Anyway, once I had the zipped files, I wanted to sift through the folder structure and copy over all the media from it to a specific location. Using a combination of the tree and find commands, this was pretty straightforward. Logging them here for later reference:

List files with tree hierarchy:

tree
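tree usually isn't installed by default; on a Debian/Ubuntu box it's a quick install away:

sudo apt install tree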

Find all files within the current directory and nested sub-directories and move them to a specific folder:

find . -type f -name "*.*" -print -exec mv -n {} /path/to/destination/directory/ \;
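Since it was only the photos and videos I was after, a slightly narrower variant that matches specific media extensions also works; the extension list here is just an example to adjust as needed:

find . -type f \( -iname "*.jpg" -o -iname "*.jpeg" -o -iname "*.png" -o -iname "*.mp4" \) -exec mv -n {} /path/to/destination/directory/ \;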

Once I had the files [about 25 GB in total], I realised I had to move them over to my dad's machine. Good old Syncthing came to use here to send over the files.

State of The Networks – Jan 2020

A quick rundown on the state of the home-servers I run or applications I host on the cloud.

Last week I opened up my Nextcloud instance for external access. Now, since my brother's RPi4 was already exposed on the same network, I had to set up a reverse proxy on another RPi3 to access both simultaneously behind the router. Here's the current setup:

Proxy server: An RPi3 running Raspbian Buster Lite with HAProxy installed to handle the reverse proxying. Here's the gist of the config that's handling all the heavy lifting. Since SSL is handled by the other servers themselves, all I needed was a quick pass-through handshake from HAProxy.
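In essence it's an SNI-based TCP pass-through: HAProxy peeks at the TLS ClientHello and routes on the requested hostname without terminating SSL itself. A minimal sketch of that idea looks like the following, where the hostnames and internal IPs are placeholders rather than my actual config:

frontend https_in
    bind *:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend nextcloud_rpi3 if { req_ssl_sni -i cloud.example.org }
    use_backend nextcloud_rpi4 if { req_ssl_sni -i pi.example.org }

backend nextcloud_rpi3
    mode tcp
    server rpi3 192.168.1.10:443

backend nextcloud_rpi4
    mode tcp
    server rpi4 192.168.1.11:443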

Server 01: An RPi3 running NextcloudPi, essentially serving Nextcloud for all my file needs. Data is simultaneously backed up on a couple of machines within the network. I decided against a remote backup [S3/Backblaze] for now.

Server 02: An RPi4 running DietPi and serving Pi-hole and a Nextcloud instance. I've turned off DHCP on the Orbi router and delegated that to the Pi-hole. Both the Pi-hole and the router have static IPs assigned to the SBCs and my trusty Dell machine based on their MAC addresses. I'm debating if I should move the Pi-hole over to the proxy server…

Beyond the home-lab, I have a droplet with DigitalOcean serving this website and a few other portals I manage. ServerPilot runs in the background on that droplet taking care of all the critical needs. I do intend to shift this over to a home-lab once I get hold of my ODroid XU4 which is currently in the Uganda shipment several thousand miles away in Chennai!

Early this year, I also procured a Hetzner cloud instance to test its stability and see if I could move certain portals over to it. I should say I'm pretty impressed! Running Debian Stretch and powered by YunoHost, I installed PixelFed and Wallabag on it. Installation has never been easier! One drawback for sure is that the code-base may lag a bit behind upstream as it gets packaged for YunoHost. Nevertheless, it's pure magic to see things get installed with just a few clicks and not much back-end work.

And then there's the Moodle Bitnami install I manage, running on an AWS instance. I intend to move it over to the Hetzner cloud over the next month or so.