k8s ConfigMap example

Just a little ConfigMap example.

Posted in Uncategorized | Leave a comment

Ansible json and selecting attributes

Today I had to work out how to pull an attribute out of some JSON provided by AWS when creating an AMI. Basically, I tag the AMI snapshot with the same tags I use for the AMI and its source instance.

The code below has some interesting bits to it.

You can easily embed JSON in your Ansible YAML – lines 10-30 – with the '|' literal block character.

Converting JSON so that Ansible can parse it is done with the 'from_json' filter. Wrapping that output in parentheses '()' allows you to immediately access the JSON, and using '.' dot notation you can follow your attributes down to the data you need. Line 36.
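As a minimal sketch of that pattern (the variable name ami_json and the attribute path are made up for illustration – substitute your own registered variable):

```yaml
# Hypothetical sketch: parse a JSON string with the from_json filter,
# wrap the result in parentheses, then walk it with dot notation.
- name: Show a value via dot notation
  debug:
    msg: "{{ (ami_json | from_json).image.name }}"
```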

That is, until you have an unordered list of objects within an attribute (line 14). After doing seven other OS builds, my code broke because this time the unordered list had more than one item in it (lines 15, 22, 25). All of a sudden snapshot_id was not present in the 'first' item. It is in the JSON, but as it's an unordered list, it's not always first (line 43).

Hence I needed a better way of selecting the object that contained the attribute I wanted.

'selectattr' is your friend here, together with the 'defined' test – this lets you pull back a list of only those objects that contain the attribute you want, i.e. the ones where 'snapshot_id' is defined. In my example this is one of the three objects.

In line 47 you can see me successfully grab the right object. In line 49 comes the weird part: after wrapping with parentheses '()' you'd think I'd be able to access the object's attributes via dot notation. But no, you have to use the old-style square brackets '[]'.
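The selection boils down to something like this (device_map stands in for my registered variable – the name is illustrative, not my exact code):

```yaml
# Sketch: turn the dict of device mappings into an ordered list of its
# values, keep only the objects where snapshot_id is defined, take the
# first one, then index the attribute with square brackets.
- name: Grab the snapshot id
  set_fact:
    snapshot_id: >-
      {{ (device_map | dict2items | map(attribute='value')
          | selectattr('snapshot_id', 'defined') | list | first)['snapshot_id'] }}
```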

Anyway it works and my code successfully builds again – woo hoo.

The code

ok: [localhost] => {
    "msg": [
        "Block Device Mapping",
        {
            "/dev/sda1": {
                "delete_on_termination": true,
                "encrypted": false,
                "size": 8,
                "snapshot_id": "snap-0e24e63b741d17279",
                "volume_type": "gp2"
            },
            "/dev/sdb": {
                "virtual_name": "ephemeral0"
            },
            "/dev/sdc": {
                "virtual_name": "ephemeral1"
            }
        },
        "First item of unordered dict",
        "/dev/sdb",
        "Select only those objects that contain snpshot_id - by making it an ordered list first",
        [
            {
                "delete_on_termination": true,
                "encrypted": false,
                "size": 8,
                "snapshot_id": "snap-0e24e63b741d17279",
                "volume_type": "gp2"
            }
        ],
        "First",
        {
            "delete_on_termination": true,
            "encrypted": false,
            "size": 8,
            "snapshot_id": "snap-0e24e63b741d17279",
            "volume_type": "gp2"
        },
        "Snapshot id",
        "snap-0e24e63b741d17279"
    ]
}

Playing with MicroK8s

So Docker Swarm has gone (well, almost). k8s, or Kubernetes, is the major player now.

Quick disclaimer: this is about a local development environment, NOT a production environment, Caveat Emptor!

I’ve been using Docker and Docker Swarm for years – it’s really easy to use – but things move on. Building your own home k8s cluster is a bit much, but luckily for the rest of us, Canonical created MicroK8s.

In their words:

Zero-ops Kubernetes for workstations and edge / IoT

A single package of k8s for 42 flavours of Linux. Made for developers, and great for edge, IoT and appliances.

https://microk8s.io/

Installation is a breeze – just use snap. But enough of that; Canonical have great resources on how to do all of it. All I wanted to do was host an app (get-iplayer) and, instead of doing it the old way on Docker, use k8s and see what's what 🙂

So, in doing this, you still need to develop it the old way on Docker first. Simple enough.

Secondly, you then need to upload the Docker image into your registry. Canonical have thought of this, and you can easily enable a local k8s registry.

Thirdly, you need to write a YAML configuration to spin up the container in k8s.

All easy stuff you might think. Well not exactly…

local image registry

Enabling the k8s registry is nice and simple, but pushing the image from Docker into the k8s registry and then pulling it back from k8s takes a little reconfiguration.

There are already a lot of detailed blogs about this, but basically you need to make sure that you have the right IP set in the Docker daemon config.

Also make sure that you tag the images correctly. I didn't initially, while following someone else's blog, and got all sorts of weird error messages that took me down rabbit holes I didn't need to go down. The IP below is that of my host running MicroK8s. The port number is the one created by MicroK8s.

$ ss -plant | grep 32000
 LISTEN     0      128         :::32000                   :::*


$ cat /etc/docker/daemon.json
{
  "insecure-registries" : ["192.168.0.10:32000"]
}

Then you need to update the k8s containerd template so it can resolve requests to the registry when creating containers.

vi /var/snap/microk8s/current/args/containerd-template.toml

    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
        [plugins.cri.registry.mirrors."192.168.0.10:32000"]
          endpoint = ["http://192.168.0.10:32000"]

Then restart k8s

$ microk8s.stop && microk8s.start

This enables you to push from Docker and pull from k8s

$ docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
alpine                        latest              cc0abc535e36        13 days ago         5.59MB
marginal/get_iplayer          latest              d99cb5a2379c        5 weeks ago         108MB

$ docker tag d99cb5a2379c 192.168.0.10:32000/get-iplayer:latest

$ docker push 192.168.0.10:32000/get-iplayer

Then you can test pull it

$ docker pull 192.168.0.10:32000/get-iplayer
Using default tag: latest
latest: Pulling from get-iplayer
Digest: sha256:aaadeb4a4cbef74cb1a661113b7e7ce1c4c8d23e414b248a6286b880fa3f38a1
Status: Image is up to date for 192.168.0.10:32000/get-iplayer:latest

woo hoo!

Next do a test pull from the k8s repository – note the --plain-http flag is required.

$ microk8s.ctr image pull --plain-http 192.168.0.10:32000/get-iplayer:latest
192.168.0.10:32000/get-iplayer:latest:                                             resolved       |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:aaadeb4a4cbef74cb1a661113b7e7ce1c4c8d23e414b248a6286b880fa3f38a1: exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:9c5b871b3a679bacf01b37a69ae461434331e22cb4d473ad77335b4a528e1dad:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:21b2089d738c4348b143a7132af7debfc9b84d6012081591c2f77c1608ff78c1:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:9d48c3bd43c520dc2784e868a780e976b207cbf493eaff8c6596eb871cbd9609:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:4d46728344b67ea27779b74ae5f6f081a3ae7e33b60f94a6346c488ae72f3bcc:    exists         |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:2c1b272ceac1214e0d8cfdfb5c97913835bc770644a7b97186953b1b60672ce0:    exists         |++++++++++++++++++++++++++++++++++++++| 
config-sha256:d99cb5a2379c77af2e8f90f9181d0421395633d3ecf6bdbe512cce1caf5da42e:   exists         |++++++++++++++++++++++++++++++++++++++| 
elapsed: 1.1 s                                                                    total:   0.0 B (0.0 B/s)                                         
unpacking linux/amd64 sha256:aaadeb4a4cbef74cb1a661113b7e7ce1c4c8d23e414b248a6286b880fa3f38a1...
done

Just awesome 🙂
So now we know that if we ask k8s to pull an image from our own registry, it will do it.

DNS and internet access

It’s never DNS, ever…

But it is.

It’s ALWAYS worth testing DNS and internet access BEFORE you do anything.
It’s easy to do with a couple of commands (below)

Internet access

Just fire up a busybox image and test from there.

$ microk8s.kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- ping -c 5 1.1.1.1
If you don't see a command prompt, try pressing enter.
64 bytes from 1.1.1.1: seq=1 ttl=59 time=10.039 ms
64 bytes from 1.1.1.1: seq=2 ttl=59 time=10.046 ms
64 bytes from 1.1.1.1: seq=3 ttl=59 time=44.983 ms
64 bytes from 1.1.1.1: seq=4 ttl=59 time=13.268 ms

--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 10.039/17.738/44.983 ms
pod "busybox" deleted

DNS name resolution

The test is easy, but the fix according to the internet was convoluted and difficult. Then I found a very easy per-container way to fix it.

$ microk8s.kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- ping -c 5 google-public-dns-a.google.com
If you don't see a command prompt, try pressing enter.

ping: bad address 'google-public-dns-a.google.com'
pod "busybox" deleted
pod default/busybox terminated (Error)

pfft!

$ microk8s.kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.152.183.1 kubernetes.default.svc.cluster.local

Yup, that’s not going to do it, is it?

The fix was to update the container's /etc/resolv.conf with usable values, which then skips the kube-dns service in k8s. As I'm not doing any pod-to-pod connectivity, this was fine.

Add the following to each pod spec in the YAML:

      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 192.168.0.2
        searches:
          - homenetwork.local

et voila…

$ microk8s.kubectl exec -it get-iplayer-76d8bd85c5-rhp86 -- ping -c 5 www.bbc.co.uk 
PING www.bbc.co.uk (212.58.244.70): 56 data bytes
64 bytes from 212.58.244.70: seq=0 ttl=55 time=14.833 ms
64 bytes from 212.58.244.70: seq=1 ttl=55 time=15.297 ms
64 bytes from 212.58.244.70: seq=2 ttl=55 time=47.389 ms
64 bytes from 212.58.244.70: seq=3 ttl=55 time=15.393 ms
64 bytes from 212.58.244.70: seq=4 ttl=55 time=16.063 ms

--- www.bbc.co.uk ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 14.833/21.795/47.389 ms

Woo hoo, number two!

External access to your container

Docker is much easier to set up for access to your container, via a bridged network – it's an out-of-the-box thing.

Again, the tinterweb / intertubes was full of advice for production-worthy fixes. This site was very detailed in its explanations – very helpful indeed: https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/

But in the end I wanted the easier fix, which is a single statement to expose the container on the host NICs.

ports:
  - containerPort: 1935
    hostPort: 1935

The hostPort setting, while NOT production-worthy or best practice, gets you up and running very quickly. As an aside, I'd probably use k8s load balancers for more serious work.

And the rest

At this point I had a container that I could access remotely and that could access the internet properly. Yay me!

All I had to do then was access the underlying host filesystem where I was keeping the app config and media.

         volumeMounts:
          - mountPath: /output
            name: output-volume
      # local host storage
      volumes:
      - name: output-volume
        hostPath:
          # directory location on host
          path: /mnt/NAS_Video
          # this field is optional
          type: Directory
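Putting the fragments above together, the overall shape of the deployment YAML was roughly this (a sketch, not the exact file – the names, IP and paths are the ones from my setup, so adjust to yours):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: get-iplayer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: get-iplayer
  template:
    metadata:
      labels:
        app: get-iplayer
    spec:
      containers:
      - name: get-iplayer
        # pulled from the local MicroK8s registry we pushed to earlier
        image: 192.168.0.10:32000/get-iplayer:latest
        ports:
        - containerPort: 1935
          hostPort: 1935
        volumeMounts:
        - mountPath: /output
          name: output-volume
      # bypass kube-dns and use my home DNS server instead
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 192.168.0.2
        searches:
        - homenetwork.local
      # local host storage
      volumes:
      - name: output-volume
        hostPath:
          path: /mnt/NAS_Video
          type: Directory
```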

Finally…

All in all it was very Docker Compose-like, which I liked, and it enabled me to throw up a k8s development environment pretty quickly without much trouble.

The k8s dashboard

My app running 🙂

And here is the final code…

my code


Brother printers – finding the toner levels

You get asked some odd things occasionally. Well, they're not really odd, but a little left-field maybe.

Q: "Without using SNMP, how can you find out the toner levels on a printer?"

Inkjets are easy: you install the 'ink' app. But for lasers – well, I've got Brother printers at home, and that took some digging. HP printers don't seem to have this option as far as I can tell (so hard luck for all of you with those nice expensive printers).

Basically, you need to trawl the reaches of the internet for hidden or OEM PJL commands that work on your specific printer brand.

This very helpful site https://tosiek.pl/pjl-variables-for-brother-printers/ listed a lot of non-HP PJL commands for my brand. I’d really like to know where they got them from.
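The supply.pjl file you send is tiny. A minimal version can be built like this (a sketch – ESC%-12345X is the standard PJL Universal Exit Language wrapper that brackets a PJL job; my actual file may have differed slightly):

```shell
#!/bin/sh
# Build a minimal PJL job file: UEL sequence, the BRSUPPLY query,
# then UEL again to end the job.
printf '\033%%-12345X@PJL\r\n@PJL INFO BRSUPPLY\r\n\033%%-12345X' > supply.pjl
```

You then fire it at the printer's raw port 9100 with netcat, as shown below.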

Here is what I came up with, using a little bit of netcat:

$ cat supply.pjl | nc -q 2 192.168.0.181 9100 | egrep "PJL|REMAIN|SN_|MODEL"
@PJL INFO BRSUPPLY
LAS_MODEL_CODE="84E-822:Ver.1.28"
LAS_MODEL_NAME="Brother HL-L8260CDW series"
LAS_BRMODELCODE="84E82200104"
LAS_KTONER_REMAIN="77.00"
LAS_CTONER_REMAIN="60.00"
LAS_MTONER_REMAIN="61.00"
LAS_YTONER_REMAIN="61.00"
LAS_PFKITMP_REMAIN="50000"
LAS_PFKIT1_REMAIN="99244"
LAS_BELT_REMAIN="99143"
LAS_FUSER_REMAIN="99150"
LAS_SCANNER_REMAIN="99150"
$ cat supply.pjl | nc -q 2 192.168.0.188 9100 | egrep "PJL|REMAIN|SN_|MODEL"
@PJL INFO BRSUPPLY
LAS_MODEL_CODE="8C5-H46:Ver.P"
LAS_MODEL_NAME="Brother MFC-L2700DW series"
LAS_BRMODELCODE="8C5H4600104"
LAS_TONER_REMAIN="21.00"

I am assuming that BRSUPPLY (which is very verbose) stands for 'Brother Supply'.

Anyway, it looks like I need to buy a black toner cartridge very soon – 21%.

Pretty cool, huh? 🙂


Primer: Understanding the Cloud Native Impact on Architecture

https://thenewstack.io/primer-understanding-the-cloud-native-impact-on-architecture/

In today’s race towards digital transformation, architectural best practices are often sacrificed for speed. Yet the gained edge may be short-lived. Technology is developing at a rapid pace and enterprises must leverage future innovations β€” such as cloud native technologies β€” to pivot quickly and meet market demands.

While a well-designed, modular cloud native architecture requires a little more time and resources during the planning and implementation phase, it will also enable IT to adapt and extend it as new technologies hit the market. A system developed without these considerations in mind may be up and running faster but will struggle to adapt as quickly as business needs evolve.

This Is Why IT Cares

  • By abstracting at an infrastructure level, all underlying resources merge into one giant pool of resources developers can tap into. Differences don’t matter anymore. Kubernetes will deal with all those details, not the developer.
  • Containers package everything the application needs so it can be moved between environments on runtime with no downtime. As far as the containerized app is concerned, it’s running on Kubernetes, whether on-prem or GCP is irrelevant.
  • Cloud native services are available across your environments. So if you move your containerized app from on-prem to GCP, you can still use the same services β€” no configuration required.

A great write-up on the impact of cloud nativeness on architecture – well worth the read.


Life as a Linux system administrator

https://www.redhat.com/sysadmin/life-linux-system-administrator

If you’re thinking about becoming a system administrator or continuing your career as one, you should read this excerpt from one system administrator’s experience.

A great write-up, which fits pretty closely with how I ended up a Linux sysadmin (though I'm not sure that's where I am any more, hehe 🙂).


Ansible – reading a remote yaml file

Recently, I had to read a remote YAML file to retrieve a value from it.

As file lookups only work on local Ansible controller files, doing a cat and then from_yaml seemed the easiest way around it.
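A sketch of the approach (the remote path and the key are invented for illustration):

```yaml
# Cat the file on the remote host, then parse the output with from_yaml.
- name: Read the remote YAML file
  command: cat /etc/myapp/settings.yml
  register: remote_file
  changed_when: false

- name: Pull a value out of it
  set_fact:
    db_host: "{{ (remote_file.stdout | from_yaml).database.host }}"
```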

my code

A (much) faster VMware dynamic inventory for Ansible

I wrote a cut-down version of the Ansible VMware dynamic inventory script, based on a vSphere Python SDK example.

The original vmware_inventory.py script takes 60 seconds to run against 20 hosts, whereas my code takes 1-2 seconds for 140 hosts 🙂

Sitting waiting for the cache to update was a real pain; now I don't even need the cache.

My code


How to hack gmail and twitter, plus others

Sadly it’s really easy…

If you have the person's phone and they have enabled SMS-based authentication.

https://www.darkreading.com/endpoint/i-hacked-my-accounts-using-my-mobile-number-heres-what-i-learned/a/d-id/1336315

It's almost a click-bait headline, but not quite. 🙂


AWS Game Day – Unicorn Rentals

A day of fun @work.

Yesterday I was extremely fortunate to be able to attend an AWS Game Day.

This has been blogged about before by many others (jon.sprig.gs), but I thought I would put my take on it, plus some pics.

The usual simulation was involved: the company that provides unicorn rentals has a CEO who disappears, along with all the tech staff, who have run off to a bitcoin startup. You are brought in as new hires to fix the little infrastructure that exists and firefight for the rest of the day.

People are split into teams of 3-4 that compete against each other. AWS provided quite a few prizes for the winners.

(Lots more prizes not shown – but the soft toys were the top prizes 🙂)

Points were awarded for complete transactions and how quickly those transactions were done. Points were deducted for the amount of infrastructure you were using, so 20x EC2 instances cost you quite a few points every so often.

Basically this gets your brain ticking into overdrive. Everything you can access (and more) on AWS is at hand to utilise in keeping things afloat. We were asked to keep the instances to t2.micro and only allowed a certain maximum, but that added to the fun. TBH, this is just like real life: any customer you are working with has constraints, normally budgetary, that you have to work within.

Considering the number of FDEs in the room and the quality of skilled people there, we cheekily picked a team name – "aiming_for_third". I think that subliminally allowed us to have more fun and use this training day for what it was (an almost unlimited thrashing of AWS services with a light-hearted goal at the end) rather than worry about an outright competition. This seemed to help us in the long term (more on that later).

I should point out here that I am one of those people who learns by doing. PowerPoint presentations and YouTube videos are great for some people, but I have to roll my sleeves up and get my hands dirty – then it sticks. So this sort of training is just right for me.

Our team split the workload into chunks depending on skill set and the need to play with certain services. Once someone found they were at an impasse, we all crowded around to help out. All very DevOps, but not formally put in place – it just happened (as it should).

The teams hard at work 🙂 Matt and Dan, from team aiming_for_third, front left. Adam was grabbing a coffee.

On your marks, get set, go….

AWS gives you some very useful web pages to monitor your progress within the competition. We didn't quite utilise these fully at the start, but once our AWS trainer pointed out our little mistake we quickly realised what we were missing out on. The takeaway here is: read and click on everything.

We were constantly firefighting, as the simulation starts with the rental website down and thus not generating any revenue for the customer. Yikes!

The runbook (support team documentation) was sparse, to say the least. But being from a 4th-line support background, this didn't faze me, and we picked through the few short pages of old, current, missing and wish-list details to get a quick handle on the infrastructure.

Why is it not working, again?

Nefarious incidents happened throughout the day while we were trying to maximise the amount of traffic the website could handle; this was put down to ex-employees with admin access trying to bring the site down. Those in the know knew this was really ChaosMonkey, and we spent some time trying to block it from upsetting our infrastructure. In the end it was embedded so deeply (and we had slightly restricted IAM permissions) that we couldn't. A shame – but we were told that was on purpose – though that didn't stop us from trying.

Better infrastructure

Our team came to the quick conclusion that the infrastructure in play was not fit for purpose. So we quickly discussed the need to shore up the current infrastructure and the plan for the replacement infrastructure.

Understanding the website, the back-end apps and how they work is key here. Knowing the requirements and constraints enabled us to work out the direction we were going to take.

All very agile…

Yup, it was a game, so we took risks. We updated production code and infrastructure while it was serving pages and generating revenue – not something I'd necessarily condone in normal circumstances. But we were here to have fun, and we did.

Dropping a few connections here and there was acceptable when we created new infrastructure and swapped endpoints over. Watching the game console screens showing the scores, with the completed and failed transactions, was helpful and a good poke with a sharp stick to get things working faster/better. 503 errors were the bane of our day.

Unicorns 2.0… slight spoilers here…

We decided to containerise the app. It was the next logical step. I personally wanted to use Lambda, but getting a Go binary into Lambda was outside my skill set and the time I had to Google, so we stuck with what we knew and were familiar with. Remember, we are still firefighting here while we do this 🙂

A minimal container base (Alpine) and the Go binary (from the website back end) were used, alongside a fresh load balancer, a cache and appropriate scaling, to give us a new infrastructure onto which we could blue/green our deployment.

Now what? It’s broken again…

We were getting help from ChaosMonkey throughout the day; the first hit was the worst, where lots of things were updated maliciously or, in fact, just downright deleted.

For most of the day, EC2 instances just disappeared for no reason. Of course scaling took care of this, but when you were watching the points coming in and the transactions suddenly dropped with 503 errors, a quick look at your CloudWatch dashboards told you not to worry about it – it wasn't anything you had done, and it was healing itself.

One of our own issues that we contended with throughout the day was that memory on the t2.micro was limited, and our containers seemed to be using more than we wanted them to. This hampered how many containers we could scale to per EC2 instance. So we spent a lot of time, once we were up and running, trying to diagnose this and maximise the throughput we could get.

Black Friday… Marketing bods get busy…

At the end of the day, we were told that the Marketing team had done their job well and we should expect large amounts of traffic within the next few minutes.

We were trailing at this stage in 4th/5th place, as we had spent a lot of time creating the right infrastructure for the customer. Plus, remember our team name? We weren't in it to win it, just to really have that hands-on time and learn by doing…

As long as we could get to 3rd place we would be happy, as we would then have done what we set out to do.

As the traffic ramped up we began to notice something: our infrastructure was holding out. Our transaction trend was outpacing everyone else by a large factor. ChaosMonkey was still doing its thing, but the scaling took care of that.

Soon we were in second place with only a few minutes to go. Could we dare hope…

In the end…

We won!

Much to our team's delight. Our transaction trend never faltered, as you can see in the picture below.

Fluffy pink unicorns for the winners.

256,000 points, wow!

and a sticker for my laptop

Pink unicorns are cool

A thoroughly enjoyable day, working hard, mashing the keyboard and mouse.

If you ever get the chance to go on an AWS Game Day, then I really recommend you do so.

Thanks

To John Cook for the GIF of us winning. 🙂

Finally, thanks to AWS for doing this with us. They were great, very knowledgeable, supportive, funny and patient with us.
