While I've got a big 🧵 going on Twitter for the mirror.fcix.net project that @warthog9 and I are working on (twitter.com/KWF/status/1509276), I figured Mastodon would be a fitting place to put the thread for the other half of the same project, where John and I are building a fleet of "Micro Mirrors" to help distributions continue to operate for free.

So we're building mirrors and then passing the hat around to fund this silly little project in exchange for entertainment. paypal.me/kennethfinnegan

🧵

As part of the MFN (mirror.fcix.net) project, John built an amazing Grafana/InfluxDB telemetry system that takes every HTTP request in the logs and parses out which project, which release, which ASN, and how many bytes each request was, giving us an amazing level of visibility into what's going on with that large software mirror.
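
As a rough illustration of the kind of per-request accounting involved (this is not John's actual pipeline, which also tags releases and client ASNs and ships everything to InfluxDB/Grafana), here's a minimal Python sketch. It assumes a stock nginx "combined" access log and treats the first path component as the project name; both of those are my assumptions, not details from this thread.

```python
# Minimal sketch of per-project byte accounting from an access log.
# Assumptions (not from the thread): nginx "combined" log format, and
# project name = first path component of the request URI.
import re
from collections import defaultdict

LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def bytes_per_project(log_path):
    totals = defaultdict(int)
    with open(log_path) as f:
        for line in f:
            m = LOG_LINE.match(line)
            if not m or m.group("bytes") == "-":
                continue
            # /epel/9/Everything/x86_64/... -> "epel"
            project = m.group("path").lstrip("/").split("/", 1)[0] or "(root)"
            totals[project] += int(m.group("bytes"))
    return totals

if __name__ == "__main__":
    for project, nbytes in sorted(bytes_per_project("access.log").items(),
                                  key=lambda kv: -kv[1]):
        print(f"{project:20s} {nbytes / 1e9:9.2f} GB")
```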

This led to the realization that I'm now able to calculate what I'm calling the "CDN efficiency" of every project, which is the number of bytes served per day divided by the number of bytes used on disk.

Look up the day's "bytes served" and divide by the output of "du -h -d 1 /data/mirror"
twitter.com/KWF/status/1510028
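
In sketch form, the metric is just that ratio. The Python below is a hypothetical version of the arithmetic, assuming you already have a per-project bytes-served-per-day figure (e.g. from the log tally earlier) and walking the per-project directories under /data/mirror instead of shelling out to du.

```python
# Sketch of the "CDN efficiency" arithmetic: bytes served per day / bytes on disk.
# Assumes a bytes-served-today figure per project is already available, and that
# each project lives in its own top-level directory under /data/mirror,
# matching the du command above.
import os

MIRROR_ROOT = "/data/mirror"

def bytes_on_disk(project):
    """Roughly what `du -s` would report for one project directory."""
    total = 0
    for dirpath, _, filenames in os.walk(os.path.join(MIRROR_ROOT, project)):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # dangling symlink, file vanished mid-walk, etc.
    return total

def cdn_efficiency(project, bytes_served_today):
    return bytes_served_today / bytes_on_disk(project)

# e.g. the EPEL figure below: ~530e9 bytes served / ~269e9 bytes on disk ≈ 1.97
```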

And we get numbers like the following:
centos 9.02
manjaro 2.55
epel 1.97
rocky 0.32
fedora 0.23
centos-altarch 0.21
almalinux 0.20
centos-stream 0.11

So for example, the 1.97 CDN efficiency for EPEL means that for the 269GB used on our mirror to host that project, we're serving 1.97 times that (530GB) per day.

Which led to the realization that even though we're using almost 40TB on MFN to host projects, something like 60-70% of our actual served bandwidth is coming from only the most popular 3TB worth of files.

So while these big heavy-iron mirrors are needed and valuable as foundational capacity and for the long tail of projects, what if we added more mirroring capacity and distribution by building additional mirrors which are:
* smaller
* cheaper
* easier to host

And scattered them around in more places on the Internet to combat the consolidation that seems to be happening pervasively across all strata of the Internet?

Enter the concept of the Micro Mirror. github.com/PhirePhly/micromirr

What if we built a tiny Micro Mirror that only had 2TB of storage in it and only served a few of the smallest and most popular projects?

Not as a replacement for the monolithic heavy iron mirrors, but as a way to take some of the hottest load off of them so the heavy iron mirrors can spend more of their time doing what they're good at, which is serving the long tail of the less popular content?

So for our initial proof of concept, we're using the HP T620 thin client as our base, since these are cheap and plentiful on eBay, low power, fanless, and I literally stock a pile of them in my apartment for projects where I need servers with N→0 horsepower.

Two DDR3 SODIMM slots, an M.2 SATA slot, an mPCIe slot, a 1000BASE-T NIC, and it uses <15W of power from a 19V soap-on-a-rope power supply.

2x4GB RAM sticks and a 2TB WD Blue SSD, and we're able to put together the whole box for <$250

So for less than the cost of a single one of the 16TB hard drives used in MFN, we're able to build a Micro Mirror.

The big question that we're working to answer now is whether this is a more or less effective use of resources to support Linux software distribution.

Build another monolith vs build 10 more micro mirrors to shed load off the existing monoliths.

Initial benchmarks on the T620 hardware look very good. It can read and write to the SSD at 3-4Gbps, so the amount of RAM in the Micro Mirror beyond "enough to operate the control plane" doesn't matter anywhere near as much as on a spinning-rust mirror, because shoveling content from SSD to 1Gbps Ethernet is effectively free.
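
For what it's worth, the disk-side number is easy to sanity check. Here's a crude sequential-read timing sketch; a real benchmark would use fio or dd with the page cache dropped first, and the file path here is just a placeholder, not anything on our boxes.

```python
# Crude sequential-read throughput check, in the spirit of the benchmark above.
# A real measurement would use fio or dd with the page cache dropped; this just
# streams one big file and reports Gbit/s.
import time

TEST_FILE = "/data/mirror/some-big-file.iso"  # placeholder: pick any multi-GB file

def sequential_read_gbps(path, block_size=1 << 20):
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return (total * 8) / elapsed / 1e9

if __name__ == "__main__":
    print(f"{sequential_read_gbps(TEST_FILE):.2f} Gbit/s sequential read")
```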

The 4-core GX-415 CPU in these thin clients is able to handle serving line-rate small HTTPS requests out the NIC without even maxing a single core.

Which I think is a testament both to how freaking powerful 15W TDP CPUs have gotten and to how battle-hardened and optimized software stacks like Nginx are for performance.

Not usable as an end user desktop running a web browser, but able to handle updates for a million servers per day. 🙄

So this seems encouraging as the basis for a hardware platform that meets our needs:
1. cheap
2. low power
3. easy to transport
4. small enough that colocating it won't be a burden for hosts

So we can now build the hardware and manage the instance, and for each server we just need to find an ISP with 1Gbps of surplus egress transit bandwidth and not enough manpower/interest to put in the effort of managing their own mirror server.

They give us an IP address, space, and power, and then can just forget about this thing, because it's smol and in the absolute worst case is only pumping out 1Gbps of what can be treated as entirely scavenger-class traffic below their customer traffic.

So our initial alpha deployment is sitting two racks down from my AS7034 Phirephly Design network in Fremont and hosting six projects on a 2TB SSD. codingflyboy.mm.fcix.net/

We're still working on the cosmetics of it so it doesn't look like a default Nginx install; turns out the autoindex formatting mangles .xml files getting served from disk 🤦

To keep it interesting, we've added the rule that we only deploy one Micro Mirror per building, so HE FMT2 gets checked off the list.

@smitty You're going to have to set up a mirror all on your own buddy. Sorry

@kwf I’m trying to get more ham radio related projects on my systems. Make use of that AMPR network.

@smitty Definitely a more _interesting_ use of your network than pumping out Linux updates.

I'm just looking for idle capacity out in the world and shaving off a little bit of it to be useful.
