While I've got a big 🧵 going on Twitter for the mirror.fcix.net project @warthog9 and I are working on twitter.com/KWF/status/1509276

I figured Mastodon would be a fitting place to put the thread for the other half of the same project, where John and I are building a fleet of "Micro Mirrors" to help distributions continue to operate for free.

So we're building mirrors and then passing the hat around to fund this silly little project in exchange for entertainment. paypal.me/kennethfinnegan

🧵

As part of the MFN (mirror.fcix.net) project, John built an amazing Grafana/Influx telemetry system that takes every HTTP request in the logs and parses out which project, which release, which ASN, and how many bytes the request was, giving us an incredible level of visibility into what's going on with that large software mirror.
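
For a rough idea of what that parsing looks like, a quick awk sketch (definitely not John's actual pipeline, and it assumes your request paths look like /epel/..., /fedora/..., etc.) will sum bytes served per top-level project from a combined-format nginx access log:

# Rough sketch, not the real telemetry pipeline: total body bytes served per
# top-level project directory, biggest first.
awk '{ split($7, p, "/"); bytes[p[2]] += $10 }
     END { for (proj in bytes) print bytes[proj], proj }' \
    /var/log/nginx/access.log | sort -rn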

This led to the realization that I'm now able to calculate what I'm calling the "CDN efficiency" of every project, which is the number of bytes served per day divided by the number of bytes used on disk.

Look up the day's "bytes served" and divide by the output of "du -h -d 1 /data/mirror"
twitter.com/KWF/status/1510028

And we get numbers like the following:
centos 9.02
manjaro 2.55
epel 1.97
rocky 0.32
fedora 0.23
centos-altarch 0.21
almalinux 0.20
centos-stream 0.11

So for example, the 1.97 CDN efficiency for EPEL means that for the 269GB used on our mirror to host that project, we're serving 1.97 times that (530GB) per day.
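
If you want to redo that math yourself, a minimal sketch looks something like this (the 530GB figure would come out of the telemetry, and /data/mirror/epel is just an example path):

# Minimal sketch of the CDN efficiency calculation for one project.
SERVED=$((530 * 1024 * 1024 * 1024))           # bytes served in the last day (from the telemetry)
ON_DISK=$(du -sb /data/mirror/epel | cut -f1)  # bytes used on disk; path is just an example
echo "scale=2; $SERVED / $ON_DISK" | bc        # prints roughly 1.97 for EPEL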

Which led to the realization that even though we're using almost 40TB on MFN to host projects, something like 60-70% of our actual served bandwidth is coming from only the most popular 3TB worth of files.


So while these big heavy-iron mirrors are needed and valuable for a foundational capacity and the long tail of projects, what if we added more mirroring capacity and distribution by building additional mirrors which are:
* smaller
* cheaper
* easier to host

And scattered them around in more places on the Internet, to combat the consolidation that seems to be happening pervasively across all strata of the Internet?

Enter the concept of the Micro Mirror. github.com/PhirePhly/micromirr


What if we built a tiny Micro Mirror that only had 2TB of storage in it and only served a few of the smallest and most popular projects?

Not as a replacement for the monolithic heavy iron mirrors, but as a way to take some of the hottest load off of them so the heavy iron mirrors can spend more of their time doing what they're good at, which is serving the long tail of the less popular content?

So for our initial proof of concept, we're using the HP T620 thin client as our base, since these are cheap and plentiful on eBay, low power, fanless, and I literally stock a pile of them in my apartment for projects where I need servers with N→0 horsepower.

Two DDR3 SODIMM slots, an M.2 SATA slot, an mPCIe slot, a 1GbaseT NIC, and it uses <15W of power from a 19V soap-on-a-rope power supply.

2x4GB RAM sticks and a 2TB WD Blue SSD, and we're able to put together the whole box for <$250

So for less than the cost of a single one of the 16TB hard drives used in MFN, we're able to build a Micro Mirror.

The big question that we're working to answer now is whether this is a more or less effective use of resources to support Linux software distribution.

Build another monolith vs build 10 more micro mirrors to shed load off the existing monoliths.

Initial benchmarks on the T620 hardware look very good. It can read and write to the SSD at 3-4Gbps, so the amount of RAM in the MicroMirror beyond "enough to operate the control plane" doesn't matter anywhere near as much as on a spinning rust mirror, because shoveling content from SSD to 1Gbps Ethernet is effectively free.
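
(Not necessarily how we measured it, but a quick-and-dirty way to get numbers in that ballpark on your own box is direct I/O through dd; /data/test.bin is just a scratch path.)

# Rough sequential write then read throughput, bypassing the page cache.
dd if=/dev/zero of=/data/test.bin bs=1M count=4096 oflag=direct conv=fsync
dd if=/data/test.bin of=/dev/null bs=1M iflag=direct
rm /data/test.bin
# dd reports MB/s; ~400-500 MB/s is the 3-4Gbps range mentioned above.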

The 4-core GX-415 CPU in these thin clients is able to serve small HTTPS requests at line rate out the NIC without even maxing a single core.

Which I think is both a testament to how freaking powerful 15W TDP CPUs have gotten and how battle hardened and optimized software stacks like Nginx are for performance.

Not usable as an end user desktop running a web browser, but able to handle updates for a million servers per day. 🙄

So this seems encouraging as the basis for a hardware platform that meets our needs:
1. cheap
2. low power
3. easy to transport
4. small enough that colocating it won't be a burden for hosts

So we can now build the hardware and manage the instance, and for each server we just need to find an ISP with 1Gbps of surplus egress transit bandwidth and a lack of the manpower or interest to manage a mirror server of their own.

They give us an IP address, space, and power, and can then just forget about this thing, because it's smol and in the absolute worst case is only pumping out 1Gbps of what can be treated as entirely scavenger-class traffic prioritized below their customers' traffic.

So our initial alpha deployment is sitting two racks down from my AS7034 Phirephly Design network in Fremont and hosting six projects on a 2TB SSD. codingflyboy.mm.fcix.net/

We're still working on the cosmetics of it so it doesn't look like a default Nginx install; turns out the autoindex formatting mangles .xml files getting served from disk 🤦

To keep it interesting, we've added the rule that we only deploy one micromirror per building, so HE FMT2 gets checked off the list.

CodingFlyBoy is now live for epel, opensuse, and arch linux, putting it at 3/6 projects live.

My gut feeling is that the sweet spot for these micro mirrors is going to be about 100Mbps of baseline load, so we have 900Mbps of burst capacity for individual requests / something happening like a release day or a CVE.

So if we end up with this thing running with its NIC at >20% capacity ALL the time, I think we overshot and need to put fewer projects on each Micro Mirror.
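
A crude way to keep an eye on that baseline (assuming sysstat is installed, and with "eno1" just standing in for whatever the interface is actually called):

# Average NIC throughput over a minute, converted from kB/s to Mbit/s.
sar -n DEV 1 60 | awk '/^Average:/ && $2 == "eno1" {
    printf "rx %.0f Mbps, tx %.0f Mbps\n", $5*8/1000, $6*8/1000 }'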

I also think it's a good demonstration of how the MMs with 8GB of RAM are really leaning on the fact that they're entirely SSD-based.

This micro mirror with three projects is reading more from disk (left) than our big iron mirror with 384GB of RAM (right).

I so badly want to do something cute like hack a 2TB laptop spinner into these T620s, but it just doesn't make sense for how hot the entire working set is on these small mirrors.

There's an undergrad senior project for a stats major in here: how to model HTTP requests to a software mirror, and what a good metric is for how close to full capacity a mirror is operating. My 100Mbps base continuous load is just a gut feel.

The Poisson distribution is a trash model for HTTP requests that should never be used when discussing load on update mirrors, but WHAT IS a good model?

We have caused our first negative impact for a project with our Micro Mirrors. :(

Not really MM-specific, but it turns out that the sample code online for making pretty autoindex pages using XSLT has Nginx try to apply the template to EVERY file served, not just the generated directory listings.
github.com/AlmaLinux/mirrors/p

The solution was to be more specific, so that our autoindex and styling are only applied to paths ending in a forward slash:
github.com/PhirePhly/micromirr
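
A quick sanity check after a change like that (the exact paths here are just examples) is to make sure a directory URL still gets the styled listing while a real .xml file comes back untouched:

# A directory URL (ends in /) should come back as the styled autoindex HTML...
curl -sI https://codingflyboy.mm.fcix.net/epel/ | grep -i '^content-type'
# ...while an actual .xml file on disk should come back with its own MIME type,
# not run through the XSLT template.
curl -sI https://codingflyboy.mm.fcix.net/epel/9/Everything/x86_64/repodata/repomd.xml | grep -i '^content-type'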

Someone decided to crawl codingflyboy.mm.fcix.net for all of EPEL over HTTP, which is... a choice.

But the hardware didn't really care. It was still able to serve it as fast as the other end was able to ask for it.

Good progress is being made by @warthog9 to get the fleet of Micro Mirrors to all stream their telemetry and logs back to a single watchtower host, so we're going to be able to monitor our fleet from a single pane of glass and generate ridiculous stack charts to compare this fleet of $200 thin clients against our $3500 heavy iron mirror.

Here's the last 24 hours of data for just the codingflyboy.mm.fcix.net mirror. Pretty respectable for a software mirror running on a thin client!

Vs the last 24 hours for mirror.fcix.net, with only the same four projects plotted.

An interesting debate comes up over whether it makes sense to only send 10% as many requests to the Micro Mirror because its NIC is 10% as fast as mirror.fcix.net's.

This brings us to five micro mirrors plus our original heavy iron mirror.fcix.net server online.

This is, predictably, getting out of hand.

It's also getting kind of funny because we have push mirroring set up between MFN and the micro mirrors.

So even though the update script on the micro mirrors has a 200Mbps rate limit, when a project we carry on all the micro mirrors (like Fedora) updates, it causes an extra 1Gbps of rsync traffic just from our own rsync clients.
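
For reference, that rate limit is roughly the rsync equivalent of the following sketch (not the actual update script, and the module path is just an example):

# Rate-limited rsync pull; --bwlimit is in KiB/s, so 25000 ≈ 200Mbps.
rsync -a --delete --bwlimit=25000 \
    rsync://mirror.fcix.net/fedora/ /data/mirror/fedora/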

And I'm packing up a sixth micro mirror tonight, so this problem is still getting worse.

@kwf Maybe with nginx as a caching „frontend“ to the real mirror? Just allocate enough disk to the cache and a long expiry period?

@falk the problem is you don't want a long expiry for the main metadata index, and thin front ends that lose connectivity to the backend server start getting into a lot of strange failure modes, so we decided to stick to standalone mirrors

@kwf Oooh, didn’t think about the index files. And you’re right about the failure modes (valid index files with packages missing..)

Even with really small caches (on the order of 10GB-20GB) the cache hit rate is extremely high, but every time anyone tries to build these overly complex mirror setups, they inevitably end up in some weird split brain scenario that breaks things.

We want each to be a functionally independent mirror that meets distros' expectations about how it behaves and when they should disqualify it and take it out of the rotation.

@kwf I think that says more about the web browser than anything else.

@smitty You're going to have to set up a mirror all on your own buddy. Sorry

@kwf I’m trying to get more ham radio related projects on my systems. Make use of that AMPR network.

@smitty Definitely a more _interesting_ use of your network than pumping out Linux updates.

I'm just looking for idle capacity out in the world and shaving off a little bit of it to be useful.

@kwf I don't have any opinion on the stats. I just wanted to say this has been a fascinating thread and I really appreciate that you've been posting it over here on mastodon.

@kwf @zbrown You can add FOSS Software Distribution Infrastructure Sausage Maker to your long list of courtly titles.

@kwf depending on the analysis you want to do, negative binomial might be better than Poisson (cf. johndcook.com/blog/2009/11/03/ and links from there). It isn't memoryless, so recursively reasoning about request counts gets harder, but it might be better as a distribution for a predictive model (because the variance and mean are no longer the same parameter, and you can think of it as a mixture of Poisson distributions). You could even consider en.wikipedia.org/wiki/Conway%E, as suggested in the comment.

@kwf it sounds like one thing you might gain out of this project (just looking at your thread here) is factoring apart the load so that the more popular subset of files is more widely available on a custom CDN of sorts. Depending on the over- or underdispersion of the distribution of HTTP requests, a simple model using the Poisson distribution may be more or less suitable.

@kwf It isn't clear to me, however, if you really need a super clever model for the overall distribution of HTTP requests. Maybe that's so because the right model would help you factor the distribution in a way that shows you how to break things apart onto micro mirrors.

This leads to a question: maybe once you have split things up onto micro mirrors, the load on each mirror would be more Poisson-like, or exhibit underdispersion. What would you expect or want to see?

@trurl I have no idea what I'm looking for.

We're focusing on the operational and political problems for micro mirrors. Numeric success metrics for them are a whole other ballgame.

@kwf how long does it usually take for traffic to ramp up on a new mirror?

@nicoduck Depends on the project.

Fedora stuff is pretty good since their portal is self-serve, so a few hours. Most projects are 2 days to 2 weeks.

Debian is the high end at 1-5 months to get your mirror added to the load balancer.

@kwf also tcp is too slow, the protocol too obnoxious sooooooo udp void screaming it is!

It's a lot faster now 😜


@kwf I like this concept and it has merit, especially if you have a stack of them and round-robin the load across them; that will make up for the slower port speed and other internals. What OS are you running on these and the big iron?

@Tubsta our plan was to try and get every one of these on a different network and in a different building.

Some projects support weights per mirror, and for those that don't we're using the argument that a 1G mirror with 4 projects is comparable to a 10G mirror with 40 projects.

The big iron is running Debian, but I used the micro mirrors as an opportunity to force myself to use Alma for a project.

@kwf if you are looking for others to mirror, the BSD projects would be nice to add. One of our large edu mirrors in Australia is what I am looking at for content inspiration. Though it looks like you get inserted into projects' GeoCDNs so users don't have to select your mirror??

@Tubsta FreeBSD doesn't seem to use community mirrors, so we weren't planning on mirroring them.

@kwf No, and it is quite frustrating for #FreeBSD users, especially in Australia. The other BSD projects (OpenBSD, NetBSD and DragonflyBSD) all have traditional mirror configurations.

@kwf I think if ASes with the capacity ran some sort of mirror (big or small), the internet would be a much better place. I'm in the process of mapping something out for our network, but really still in the thought stages.

@kwf what's the mechanism that handles 404s on the micro mirrors? is it all on the client to try another mirror (and is that not perceived as "icky" by those in charge?), or is there some magic "these files are available on a micro mirror" flag in the manifests, paired with a second list of mirrors?.. 🤔

@attie it depends on the project.

For projects like epel, Manjaro, alma, I'm hosting the whole project locally.

For openSUSE, they track which files are available on my mirror, and clients then start every file download by querying openSUSE to find out which mirror to get redirected to.
