
Off the Grid

In my last blog post, I mentioned briefly how I have been migrating more and more of the services I rely on off of the public internet and third-party cloud services and on to a private VPS sitting protected in my home network. Most of these services aren't reachable from the wide internet, but a cheap router running pfSense lets me provide highly-available OpenVPN service to myself and my roommates. This affords me some assurance that my data is in the hands of people who value it (myself, mostly) after a couple-hundred dollar investment in parts and a lot of time I'd spend fucking with servers anyways.

Since completing the bulk of this migration, I have begun investigating how to make the setup more resilient to system outages and to the internet downtime caused by our bumbling but lovable ISP, San Bruno Cable, picking up bad BGP routes and randomly cutting us off from any traffic outside of San Francisco. Part of this comes down, simply, to the ideas prolific in high-availability computing: build failover networks and failover processes into the system, and monitor the crap out of everything. Of course, having everything within my home network affords me little on any of these fronts, and while I could keep forging new Rackspace instances, that shit adds up quickly and my income can only sustain so much cloud-playgrounding.
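
The monitoring part, at least, is easy to start on. Here's a minimal sketch: an Ansible play that fails loudly when a service stops answering. The monitor group and the two URLs are placeholders, not my real inventory.

    ---
    # check-services.yml: assert that internal services still answer.
    # 'monitor' and the URLs below are hypothetical placeholders.
    - hosts: monitor
      gather_facts: false
      tasks:
        - name: Poll each internal service over HTTP
          uri:
            url: "{{ item }}"
            status_code: 200
          with_items:
            - http://wiki.fort.kickass.systems/
            - http://git.fort.kickass.systems/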

I've had a strange fascination with the ideas behind darknets: private internets built inside the public one. These ideas power Tor and, at a smaller scale, VPN darknets such as ChaosVPN and DN42; I was a member of the latter both at HeatSync and within my private network back in Arizona. These networks are interesting in how explicitly they mimic the resilience of internet infrastructure. In the case of DN42, it's quite literally informal peering agreements handled via BGP, but using OpenVPN client-to-client or tinc peering instead of explicit physical fibre or cable peering.
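
To make that concrete: on the tinc side, a DN42-style peering is little more than a host file exchanged between peers. Here's a sketch of pushing one with Ansible; the netname, subnet, and peer are invented for illustration.

    ---
    # peer.yml: describe one hypothetical peer to the local tinc daemon.
    - hosts: routers
      become: true
      tasks:
        - name: Install the tinc host file for the 'fort' peer
          copy:
            dest: /etc/tinc/kickass/hosts/fort
            content: |
              Address = vpn.fort.kickass.systems
              Subnet = 10.64.0.0/16
              # the peer's RSA public key block goes here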

Enter Kickass Systems: a play on my apartment's name, Fort Kickass, a chance to buy a shitty gTLD, and my plan to build a distributed darknet among my friends' LANs, a space where we can privately host services that we don't individually have the bandwidth to manage. In part, this is my attempt to enforce an SLA among my own projects, but it also brings more privacy-minded folks into the fray. It also gives me an excuse and some motivation to open source the growing list of Ansible roles powering my personal infrastructure. There's some pretty cool shit tucked away in there, I think.

I've been building out an informal governance structure which prevents the system from becoming too centralized on Fort Kickass while not going entirely down the decentralized-darknet rabbit hole.

It boils down to a few simple tenets:

  • kickass.systems has a public DNS tree associated with it, but it also plays double duty as our internal LAN TLD. I'm not entirely sure of the implications of this yet, but properly configured VPN clients and LAN routers should be able to handle it just fine (see the split-DNS sketch after this list).
  • Each member hangs their LAN off of a subdomain of kickass.systems; in the case of my home LAN, that's fort.kickass.systems. Each member LAN is also assigned a range of private IPs, probably something within 10.0.0.0/8, which it uses for all of its LAN DHCP (also sketched below).
  • Core Services are elected by the members via some sort of limited consensus. This means that the Root CA is handled by whoever is most technically able and willing; the same goes for the core IRCd, the core single-sign-on architecture, and so on. If someone doesn't like how a particular system is maintained or structured, they can reimplement it and petition the membership to evaluate it as a replacement Core Service.
  • Core services can be defined and undefined as deemed necessary by the same yet-to-be-determined limited consensus of the membership.
  • Each member can publish any number of non-core services in a member-public wiki directory, or choose to keep them private or invitation-only.
  • Published services which require authentication must use the single-sign-on provided as a core service.
  • Each member is obliged to enroll in a few core services: a distributed backup filesystem (Ceph, Gluster, or Tahoe-LAFS), an asynchronous discussion forum of some variety, and the above-mentioned single-sign-on.
  • Published services must be installable via Ansible roles by any other member (an example follows this list).
  • Membership can be revoked via some sort of limited consensus of the membership.
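
On the DNS point, the trick is split-horizon resolution: answer for kickass.systems locally, and let the public tree handle everyone else. Here's a sketch of how a member router running dnsmasq might do it; the internal resolver address is made up.

    ---
    # split-dns.yml: forward kickass.systems lookups to the darknet's
    # internal resolver. 10.64.0.53 is a hypothetical address for it.
    - hosts: routers
      become: true
      tasks:
        - name: Send kickass.systems queries to the internal resolver
          lineinfile:
            dest: /etc/dnsmasq.d/kickass.conf
            line: server=/kickass.systems/10.64.0.53
            create: yes
          notify: restart dnsmasq
      handlers:
        - name: restart dnsmasq
          service:
            name: dnsmasq
            state: restarted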
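
The per-member allocations could live in one shared Ansible vars file that everything else reads. The names and ranges below are invented examples, not real assignments.

    ---
    # members.yml: one subdomain and one 10/8 carve-out per member LAN.
    members:
      fort:
        domain: fort.kickass.systems
        cidr: 10.64.0.0/16
      example:
        # a hypothetical second member
        domain: example.kickass.systems
        cidr: 10.65.0.0/16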
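
And the Ansible-roles rule means that standing up another member's published service should be no more work than a playbook like this, where kickass-wiki is a hypothetical role name.

    ---
    # Deploy a member-published service from its role.
    - hosts: fort.kickass.systems
      become: true
      roles:
        - kickass-wiki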

Okay, so I guess those aren't 'a few', but they seem simple enough. The governance is easy to change as necessary, while still maintaining enough structure not to mire itself in silly navel-gazing. I think that with the handful of interested parties I have surrounded myself with, every publicly hosted service we rely on could be accounted for with minimal stress and overwork on our own time, while giving us valuable experience in systems administration and network architecture design.

If you've any interest in syncing up with me on this stuff, find me in #ftkickass on freenode, or email or twit me or something.