The problem I have always had when building elaborate home server setups is the "set it and forget it" nature of the systems I've installed bites me in the ass. Since it's not my full-time job to manage these systems, I'm really not familiar with them the way I might be with the systems I manage at work. These systems cruise along for years, and when something finally does go belly-up, I can't remember how I set it up in the first place. Now I have a giant chore looming over me, ruining a perfectly good weekend.
These days, I design everything for home with extreme simplicity coupled with detailed documentation on how I set things up.
Docker has helped tremendously, since you can essentially use an out-of-the-box Linux distro with docker installed, and you don't really have to install anything else on the hardware. Then if at all possible, I use standard docker images provided by the software developer with no modifications (maybe some small tweaks in a docker-compose file to map to local resources).
Anyway, my advice is to keep the number of customizations to a bare minimum, minimize the number of moving parts in your home solutions, document everything you do (starting with installing the OS all the way through configuring your applications), capture as much of the configuration as you can in declarative formats (like docker compose files), back up all your data, and just as importantly, back up every single configuration file.
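To make that concrete: a service's entire "installation" can often be captured in one small compose file. A hypothetical sketch (the service, image tag, ports, and paths are just illustrative):

```yaml
# docker-compose.yml -- an unmodified upstream image, with only the port
# mapping and data directory adapted to the local machine.
services:
  nextcloud:
    image: nextcloud:28-apache        # stock image from the developer
    restart: unless-stopped
    ports:
      - "8080:80"                     # local tweak: host port mapping
    volumes:
      - /srv/nextcloud:/var/www/html  # data lives on the host, easy to back up
```

Backing up this one file plus the mounted directory captures both the configuration and the data.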
The author focuses the entire blog post on remote third party services that are alternatives to popular third party services financed by data collection as a "business model". IMO, the single most important component of a home network is not any piece of the hardware/software outside the home that the third parties may control, it is the internet gateway in the home. Routers were the most important computers at the dawn of the internet, and IMO they still are the most important computers today. If the internet gateway in the home is ignored as a point of control,^1 then IMO all bets are off.
A significant amount of data collection by third parties can be eliminated or reduced by retaining control over the internet gateway. Arguably this amount is even greater than what can be affected by simply switching to carefully selected alternative third parties. IMO, it is a mistake to believe that one can reliably eliminate/reduce data collection simply by choosing the "right" third parties. Whack-A-Mole, cat-and-mouse, whatever term we use, this is a game the user cannot win. Third parties providing "services" over the internet are outside the user's control. For worse, not better, they are subject to market forces that drive them to collect as much user data as they can get away with.
Regardless of these privacy-destructive market forces, it is still possible to build decent routers from BSD project source code and inexpensive hardware. IMO, this is time well spent.
1. Control by the user
I wonder how much a gateway router can do here.
Most of the data passing through it is encrypted: HTTPS, SSH.
Cutting off phone-home requests is best done on the respective devices: you can run firewalls on most desktops and laptops, and even phones. Phones often go online via GSM or LTE, without passing through the home router at all.
While a proxy like pihole can be helpful sometimes, cutting off tracking and ads is done best by browser extensions and by using open-source clients, where available.
The best a home router can do is to not be vulnerable to exploits, stay up to date, and remain fully under the owner's control. That's why my home router runs OpenWrt.
"I wonder how much a gateway router can do here."
"... cutting off tracking and ads is best done by browser extensions ..."
What if the browser vendor, who is also a data collector, requires the user to log in or otherwise identify herself before she can use extensions?
A home "gateway" is a computer running a kernel with IP forwarding enabled that is being used as the point of egress from the home network to the internet. That is a broad definition and allows for much creativity. That is what I mean by the term "gateway". As such, a gateway can, both in theory and in practice, do anything/nothing that "desktops and laptops, and even phones" can do. Relying solely on pre-configured "limited/special purpose" OS projects as a replacement for DIY and creativity in setting up a gateway was not what I had in mind, but is certainly an option amongst many others.
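As a concrete sketch of that broad definition (interface names are examples, and this assumes a Linux kernel with nftables), a minimal DIY gateway is little more than IP forwarding plus source NAT:

```
# Enable forwarding, e.g. in /etc/sysctl.d/99-gateway.conf:
#   net.ipv4.ip_forward = 1

# /etc/nftables.conf -- masquerade LAN traffic out the WAN interface
table inet nat {
    chain postrouting {
        type nat hook postrouting priority srcnat; policy accept;
        oifname "wan0" masquerade
    }
}
```

Everything else (firewalling, DNS filtering, traffic logging) layers on top with ordinary tools, which is the point: the gateway is just a computer.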
Okay. How can we fix this? I'm dealing with it right now and this space is so hard -- likely somewhat deliberately so. I'm a 20+ year Linux user trying to get a single home network with multiple ISPs going and it just seems way harder than it ought to be; i.e. -- not that every bit of software needs to be idiot-proof, but this iptables/pfSense/netplan etc etc universe just feels downright hostile to the aspiring home user.
It is.
Multi-wan is easier with appliances. I used pfSense over the last 12 years or so with multi-wan on and off (currently off). I've run pfSense in a kvm VM, and you can do multi-wan with this. Though I generally recommend dedicated NICs for the WANs and LAN.
I've looked at the linux based appliances (as recently as last week) and only ClearOS or OpenWrt supported multi-wan. I could be wrong (I'd like to be, as pfSense/OPNsense are FreeBSD based, and that comes with, sadly, huge amounts of baggage, limited hardware support, etc.). I'll likely be looking at ClearOS as a potential replacement for the pfSense system, though if it can't handle what I need, OPNsense is like pfSense, but with far less baggage.
If you don't mind tinkering, you might be able to use mwan3[1].
If you prefer OpenWRT, you can look at running it in a VM[2] along with mwan3.
Have you checked out opnsense? It features multi-WAN. I've also had a few acquaintances say they prefer it substantially over pfsense.
I am surprised too. It truly is the home computer, connected to all computers inside and outside, and yet typical routers are cheap and dumb.
Your definition makes routers sound pretty smart, not dumb?
Cheap, yes.
Sadly many who want to control the gateway with something safe and open fall in the trap that is pfsense.
What about it is a trap? Do you feel the same way about opnsense?
These systems cruise along for years, and when something finally does go belly-up, I can't remember how I set it up in the first place.
This happened a few times to me over the years and then I was lucky enough to go on a packer/terraform course.
Now everything is scripted and stored in git. A Gitlab job rebuilds the VMs from scratch every two weeks to include the latest bugfixes and updates.
It was a lot of work at first but actually most of it was a learning experience.
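A sketch of that kind of pipeline (job names and build targets are illustrative; the biweekly cadence would come from a pipeline schedule configured in the GitLab UI):

```yaml
# .gitlab-ci.yml
stages: [build, deploy]

rebuild-image:
  stage: build
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"   # only run from the schedule
  script:
    - packer build vm.pkr.hcl                 # fresh image with latest updates

redeploy:
  stage: deploy
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - terraform init -input=false
    - terraform apply -auto-approve
```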
Now you have N problems...
What happens when those images are not available, terraform/packer change APIs, etc?
Yep. This will cruise along longer than the parent's solution, but when it breaks, you'll be starting over all of the original services from scratch plus the management system you had built once to manage them.
My solution is Kubernetes. Everything's configured in YAML files. The solution to all those problems is... change fields in YAML files.
Of course, you need to figure out what you need to change and why, but you'll never not need to do this, if you're rolling your own infra. K8s allows you to roll a lot more of the contextual stuff into the system.
You can store packages, cache images, and freeze versions of things like Packer/tf.
The solution is keeping a local mirror of all images and artifacts, and version pinning for stability (along with a periodic revision of version numbers to the latest stable version).
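For Docker images specifically, one way to get both at once is to push copies into a local registry and pin by digest rather than tag (the registry name and digest below are placeholders):

```yaml
# docker-compose.yml fragment -- a digest-pinned image from a local mirror
# survives upstream tags being deleted or re-pushed.
services:
  web:
    image: registry.home.example/nginx@sha256:<digest-recorded-at-mirror-time>
```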
Oh and don't forget that now maybe you make everything work, but in two years time your setup won't be reproducible, because chances are the original images are not available any more, they got deleted from Docker Hub some months after you used them. Yeah, you should update them anyway for security... but the setup itself is not reproducible, and being forced to use the latest version of something, with the new idiosyncrasies it might bring, is not a nice situation to be in when you just want to hurry up and resolve your downtime.
So I guess that's one more thing to worry about: maintaining your own image repository!
Maybe, but when the original docker image is no longer available on docker hub, chances are there will be something better and even easier to setup. And with docker you don't care about installing / uninstalling apps and figuring out where that obscure setting was hidden - all you need is just a stock distro and a bunch of docker-compose.yml files, plus some mounted directories with the actual data.
But a lot of those unofficial docker images are of unknown quality and could easily contain trojans. It's completely different from installing a package from your distro.
Even if so you're still spending say 50% of the original time investment every year or so just maintaining it. Unfortunately your options seem to be "set up once then never touch it again" or "update everything regularly and be at the mercy of everything changing and breaking at random times".
I mean, you should always have a backup of your dependencies (up to reason).
I develop mobile applications, and use Sonatype's Nexus repository manager as my primary dependency resolver. Every time I fetch a new dependency, it gets cached.
A monthly script then takes care of clearing out any cached dependencies which are not listed in any tagged version of my applications.
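That cleanup step can be sketched in a few lines (the paths and the "referenced" set are hypothetical; in practice the set would be built by scanning each tagged version's dependency lockfiles):

```python
from pathlib import Path

def prune_cache(cache_dir: Path, referenced: set[str]) -> list[str]:
    """Delete cached artifacts not referenced by any tagged release.

    Returns the sorted names of the artifacts that were removed.
    """
    removed = []
    for artifact in cache_dir.iterdir():
        if artifact.is_file() and artifact.name not in referenced:
            artifact.unlink()
            removed.append(artifact.name)
    return sorted(removed)
```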
Agree that documentation is key here. Anything you do that is beyond the vanilla "pave the install and plug it in" should be written down.
It doesn't need to be perfect - I have a onenote notebook that has the customizations that I've done to my router (static IP leases and edits to /etc/config/network), and some helper docs for a local Zabbix install in docker that I have. I recently learned how to migrate a database from one docker image to another, and there is no way I would remember how to do that the next time, so I wrote everything I learned down.
Just a simple copy/paste and some explanatory text is usually good enough. Anything more complex (e.g., mirroring config files in github) still (IMO) needs enough bootstrap documentation because unless you're working with it daily you're going to forget how your stuff works.
Additionally a part of my brain is worried that if I get hit by a bus my wife/kids will have a hell of a time figuring out what I did to the network. Onenote won't help them there but I haven't figured out the best way of dealing with this.
(I recognize the irony in a "I'll host it myself" post in storing stuff in onedrive with onenote but oh well)
Just to throw more products at the wall, I've been using Bookstack[0] for the same sort of documentation.
Besides being relatively lightweight and simple to set up, the out-of-the-box draw.io integration is nice. Makes diagramming networks and other things dead simple. And I know "dead simple" means I'm infinitely more likely to actually do it.
I set up a folder for notes that shares across my network using SyncThing and is backed up with a FreeNAS box.
That folder is just a collection of markdown files for each program / system and when I save on one device it updates the documentation on them all.
I use Atom to view and edit them on my Linux machines and a markdown editor app on my phone. This allows me to search across the notes too.
I've had this fairly simple, free, open source setup for years with no problems.
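The "search across the notes" part needs nothing fancier than a few lines of scripting. A sketch (the folder layout is hypothetical):

```python
from pathlib import Path

def search_notes(notes_dir: Path, term: str) -> dict[str, list[int]]:
    """Case-insensitive search over all markdown notes in the synced folder.

    Returns {filename: [matching line numbers]} so a hit can be jumped to
    from any device that syncs the directory.
    """
    hits: dict[str, list[int]] = {}
    for note in sorted(notes_dir.glob("*.md")):
        lines = [i + 1 for i, line in enumerate(note.read_text().splitlines())
                 if term.lower() in line.lower()]
        if lines:
            hits[note.name] = lines
    return hits
```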
I also started doing something similar via org-files, git, emacs, and Working Copy. It has worked pretty well, though Working Copy (the iOS git client) was buggier than I expected (but they have a great developer and support). My network isn't very good, or I'd just use emacs on iOS via SSH via Blink.
Interesting, what markdown editor do you use on your phone?
I work on trying to script each install. So if I need to repave, I have a documented, working script, and the source bits to work with.
I've preferred VMs for functional appliances for a while now. I like the isolation compared to containers. Though YMMV.
Right now, the hardest migration I have is my mail system, which makes use of a fairly powerful pipeline of filters in various postfix-connected services. It's not fragile, but it is hard to debug.
I host it myself; as the core thesis of the article pointed out, you can be deplatformed, for any reason, with no recourse. And if you lose your mail, you are probably in a world of hurt.
The one thing I am concerned about is long term backup. I need a cold storage capability of a few 10s of TB, that won't blow up my costs too badly. Likely the best route will be a pair of servers at different DCs, running minio or similar behind a VPN that I can rsync to every now and then. Or same servers with zfs and zfs send/recv.
Thinking about this, but still not sure what to do.
The diagram alone is more than enough of an argument to dissuade me from giving this a shot right now - it's simply too complicated and too much to manage for the amount of time I can dedicate to it.
BUT - I'm really thankful for people who keep posting and sharing these sorts of projects; they're the ones iterating the process for the rest of us who need something a bit more turn-key.
I'm excited to see this eventually result in something like the following:
- Standard / Easy to update containerized setup.
- Out of the box multi-location syncs (e.g. home, VPS, etc.)
- Takes 5 minutes to configure/add new locations
I want this to be as easy as adding a new AP to my mesh wifi system at home: plug it in, open the app, name the AP, and click "Done".
(Edit - formatting)
I think do a little at a time and keep at it. Over time it adds up.
At sometime you will hit something interesting: Personal Sovereignty.
I've seen other folks hit this in weird ways.
My friend started working on cars with his buddy. They finally got to an old vehicle they took all the way apart and put it together. He had gotten to the point where he could pull the engine and put it on a stand, weld things, paint, redo the wiring harness.
I remember one day I went and looked at it and he sort of casually said, "I can do anything".
Anyway, I think the diagram says something else to me. It says he understands what his setup does enough to show it/explain it to someone else.
I had this with my bicycle at some point -- learning to fix and tweak oneself without having to go to a mechanic was eye-opening. Reminds me of the core premises in Zen and the Art of Motorcycle Maintenance.
I think the diagram gives a skewed view of how hard this actually is.
I run a very similar setup, only my VPS is just a proxy for my home server, and it requires very little maintenance. I run everything with docker-compose and I haven't had to work on my setup at all this year; only about 8 hours in 2020 to set up the Wireguard network that replaced the ssh tunnels I was using previously for VPS -> server communications.
At the end of the day YMMV and use what you are comfortable with, but it's not as crazy an undertaking as it sounds.
Yes, and many popular applications are prepackaged as one-click apps by cloud providers like Vultr [1] or Digital Ocean [2].
[1]: https://www.vultr.com/features/one-click-apps/
[2]: https://marketplace.digitalocean.com
You can also enable automatic backups for your servers.
Somewhat OT, but I never realized how expensive those cloud instances are. For comparison, I pay $4.95/month (billed annually) for a KVM VPS with 2 GHz, 2 GB RAM, 40 GB SSD, 400 GB HDD in the Netherlands. That seems a lot better for selfhosting, where you probably want more raw storage than SSD space.
I went down an almost identical path/plan, but then stopped due to corruption concerns with doing the VPS / home sync the way that I wanted without a NAS in the middle managing the thing. It's still possible, but it explodes the complexity.
One of the big things I wanted to accomplish was low cost and easy to integrate / recover from for family in case of bus-factor.
I didn't expect to compete with the major cloud providers on cost, but the architecture I was dreaming of just wasn't quite feasible even though it's tantalizingly close... basically, all the benefits of a p2p internal network with all the convenience of NextCloud and all the export-ability of "just copy all these files to a new disk or cloud provider."
It's so close, there's just always some bottleneck: home upload is too slow, cold cloud storage too hard to integrate with / cache, architecture requires too much maintenance, or similar.
I think NextCloud is very close for personal use, if only there was a plug and play p2p backend datastore / cache backed by plug and play immutable cold storage that could pick up new entries from the p2p layer.
There is a cryptocurrency called Siacoin. It offers cloud storage, and there exists a Nextcloud plugin to integrate it as a storage backend. I have some plans to try this setup. What do you think?
https://nextcloud.com/blog/introducing-cloud-storage-in-the-...
The technical language in this makes me ditto the first comment: this is too much for many people out there like myself.
You are absolutely right: if you are not familiar with docker-compose, ssh tunnels, wireguard, etc., it will take more time to set up. That being said, as far as maintenance goes, you will probably have a similar experience.
Most of my setup was done through SSH during boring classes in college so I had plenty of time to read documentation and figure out new tools.
After reading through it all, I think this is more a condemnation of the author's diagram (or at least their decision to put that particular one up-front) than of their process in general, or of the challenge itself.
Breakdown of (my) issues with the diagram:
- author's interaction with each device is explicitly included, adding unnecessary noise
- "partial" and "full" real-time sync are shown as separate processes, whereas there's no obvious need to differentiate them in such a high-level overview
- devices with "partial" and "full" sync (see above) are colour-coded differently; again differentiation unnecessary
- including onsite & off-site backups in the same diagram is cool but would probably be nicer living in a dedicated backup diagram for better focus
Here's a simplified version of the same diagram:
┌───────────┐                   ┌───────────────────┐
│ nextcloud │                   │                   │
│  music    │                   │       phone       │
│  videos   ├──realtime sync───►├───────────────────┤
│  photos   │                   │                   │
│  docs     │                   │      laptop       │
│  calendar │                   ├───────────────────┤
├───────────┤                   │                   │
│    crm    ├──┐                │      desktop      │
├───────────┤  │                │                   │
│ analytics ├──┤                ├───────────────────┤
├───────────┤  │                │                   │
│    web    ├──┼──daily sync───►│     synology      │
├───────────┤  │                │                   │
│    git    ├──┤                └───────────────────┘
├───────────┤  │
│ devtools  ├──┘
└───────────┘
That is great ASCII viz! Did you do it purely by hand? I often need to, but give up...
For those wondering, I found some tools: Asciio (Linux), Monodraw (Mac), asciiflow.com (Web)
Some good suggestions in the sibling comment. This one is from asciiflow, but I haven't tried others; not sure if they're better.
Sweet diagram! Did you use Monodraw to draw it? Or something else?
Doesn't look very simplified on mobile, that's for sure.
As much as I love the way HN's design goes against many trending "UX" conventions, I think the long-time refusal to put in very very basic simple fixes like this one is bizarre.
The messed up presentation on mobile is 100% a mobile bug, for which there is a very easy fix on the dev side, and no good workaround on the commenter side.
¯\_(ツ)_/¯
Nice, maybe I can stop being a gravedigger now.
i've actually daydreamed about starting a computing appliance company that would make a variety of services plug and play for consumers and small businesses, from email to storage, to networking, to security, and to smart home. it's actually the direction apple is headed, but they're encumbered by the innovator's dilemma, which leaves an opportunity for an upstart. google and facebook are similarly too focused on adtech, while amazon on commerce, to lock up this market yet.
I've wanted to make something like this too. After years of iteration, my self hosted setup is now completely automated, and the automation itself is super simple and organized. It would be pretty easy to build a web app that allows users to apply the same automation steps to their own VPSs. The hardest part would be setting up a secure process for managing user secrets, to be honest.
Business wise, I'm not sure I'd be willing to pay for just the automation... in reality you don't use it very often. Could be interesting to try (re)selling tightly knit VPSs, more advanced automation features or support.
I think this solution still captures the self hosted ideology while also providing some cool value. I see people reinventing the wheel all the time while trying to automate self hosted processes... but then again maybe that's why we do it, we like the adventure!
I'd be willing to pay for a little consulting time about how to set up everything.
It would be fun stuff to build, but I feel like you'd struggle to make money. Google and Amazon can afford to give away the hardware, and they can smuggle their ecosystem into your house as a thermostat or a smart speaker or a phone app, or whatever.
Like, how do you persuade the audience of enthusiasts (think: Unifi buyers) to pay for a subscription to managed software they run on their own computers, raspis, whatever? I would probably spend $10/mo on something like that, but much above that and you'd be fighting against the armchair commentary of users who won't appreciate the effort that goes into stability and will basically have a "no wireless, less space than a Nomad, lame" attitude.
Hardware sales. People will pay for the convenience of a device that works out of the box with minimal setup.
On the software side, integrate tightly with your own subscription services (offsite backups, VPS, etc) to upsell to those who want that, and win over the enthusiast crowd by making it possible to host your own alternatives to those services with a little technical know-how.
Open source most components to appeal to enthusiasts, but keep the secret sauce that makes everything seamless and easy to use "source available" so you don't unintentionally turn your core business into a commodity.
Seems viable to me.
there are actually tons of companies in this space already making money (e.g., wyze), but it's highly fragmented and none have a unified vision or product strategy yet. so yes, they're vulnerable to the behemoths right now, but those dynamics aren't locked in yet.
it's mostly tough because of the high upfront capital costs (manufacturing, r&d, and marketing). people still talk fondly about discontinued apple routers and what nest could have been as an independent venture, for example.
Maybe it doesnβt need to make lots of money? Just gotta build a strong community.
Having spent the past year frustratingly trying to build these types of things in AWS and spending too much money with mistakes I'd say there is a huge opportunity here. SMB or NFS as a service for example.
https://www.rsync.net/ has been selling this solution for years. Price competitive these days. Not affiliated, just looked at it recently and thought it was extremely cool.
qnap and Synology basically already cover this market
Ok, I'm SUPER into self hosting, but this article? No way. 1) Duck out isn't a thing, just stop it. 2) Half the articles cited as examples of corporate abuse were later revealed to be mistakes by the user or easily avoidable pitfalls. 3) Self hosting still requires trust (software you're running, DNS, domains, ISP, etc...). The line of who to trust and how far is a tough one to answer, even for the informed.
How I solved it: 1) I use well vetted cloud services for things that are difficult/impossible to self host or have a low impact if lost. (Email, domains, github, etc...) 2) I self host things that are absolutely critical with cloud backups. (Files, Photos, code, notes, etc..)
I am perpetually confused about why people think that self-hosting on a VPS solves their privacy and security problems. While I'm sure there are controls in place at reputable VPS providers, it wouldn't be too difficult for them to grab absolutely anything they want. Even disk encryption doesn't save you. You're in a VM, they can watch the memory if they need to.
Using a VPS can also make you more identifiable. Your traffic isn't as easily lost in the noise. The worst thing that I know of people doing is using a VPS for VPN tunneling. While it can have its uses, privacy certainly isn't one of them. You're the only one connecting into it and the only traffic coming out of it.
So I agree with your sentiment, but your details are a little off. "it wouldn't be too difficult for them to grab absolutely anything they want. Even disk encryption doesn't save you. You're in a VM, they can watch the memory if they need to." It would be difficult because you'd have to have host access. VM disk encryption is now tied into an HSM or TPM these days, host access wouldn't help. As for memory, that is now usually encrypted, so no dice there either. The security of a big-name public VPS is astoundingly better than what you can do yourself.
"Using a VPS can make you more identifiable" I think you have a problem of "threat model" here. You're mixing up hiding from hackers, governments, etc., and just lumping it under "privacy and security". Using a VPS isn't going to make you more identifiable to google, because you're not using google now. Using a VPN isn't going to make you more identifiable to your ISP, because all they can see is that you have a VPN up. Why not use a VPS for VPN? Well you're only right it would suck if your threat model includes governments or hostile actors; me hiding from my ISP or on a public Wi-Fi? Not a problem.
You conflate a few ideas and threat models.
Security = the ability to not have your stuff accessed or changed. Privacy = the ability to not have your stuff seen. Anonymity = the ability to not have your stuff linked back to you. Threat model = who are you protecting yourself from? E.g., the steps I take to not get hacked by the NSA are different than the steps I take to make comments on 4chan or whatever, which are different than the steps I take to use public Wi-Fi.
Ref: I work for Amazon AWS, my opinions are my own insane ramblings.
> Encryption tied to TPM
Common on laptops, but I wouldn't assume that for systems/SANs in a data center, much less their virtual disks. Would love to be corrected.
> It would be difficult because youβd have to have host access.
Which AWS has, by definition.
> VM disk encryption is now tied into an HSM or TPM these days, host access wouldn't help.
Are you passing all of the data through the TPM? If no: you still need to keep the key in memory somewhere, the TPM is just used for offline storage. If yes: the TPM, and the communication with it, is still under AWS' control.
> As for memory, that is now usually encrypted, so no dice there either.
Still need to keep the key somewhere, so same concern as for disk encryption. Except I can pretty much guarantee you're not putting the TPM on the memory's critical path, so...
> The security of a big name public VPS is astoundingly better than what you can do yourself.
Feel free to back such claims up in the future. Because right now this seems to be as false as the rest of your post.
> Using a VPS isn't going to make you more identifiable to google, because you're not using google now.
What? It certainly won't make you less identifiable either.
> Using a VPN isn't going to make you more identifiable to your ISP, because all they can see is that you have a VPN up.
Your VPN provider, on the other hand, can now see all of the traffic, where before they couldn't. So the question is ultimately whether you trust your ISP or VPN provider more.
> Why not use a VPS for VPN? Well you're only right it would suck if your threat model includes governments or hostile actors, me hiding from my ISP
Sure, if you trust the Amazon over your ISP that makes perfect sense. Then again, this is the Amazon that seems to love forcing their employees to piss in bottles, and is on a huge misinformation campaign against treating their employees properly.
That seems like an upstanding place with great leadership.
> or on a public Wi-Fi? Not a problem.
Makes some sense, but it wouldn't really give you much more than hosting the VPN at home. (Well, you'd still have to do the same calculus here for home ISP vs Amazon.)
> You conflate a few ideas and threat models.
Pot, meet kettle.
> Ref: I work for Amazon AWS, my opinions are my own insane ramblings.
Good to know that AWS employees are either clueless about their own offerings, or deliberately spreading misinformation.
Seems like a place that I'd love to trust...
VPS doesn't solve privacy and security, it solves getting locked out of your account because some algorithm decided you were peddling child porn.
If you want privacy and security and you don't trust your provider, then you have to build your own hardware and compile everything you run on it from vetted source, including your kernel. You can do it, but most people decide that on balance its better to trust someone.
VPS doesn't solve privacy and security, it solves getting locked out of your account
Does it really? It just seems like instead of trusting a big company that everyone knows, you trust a smaller company that not everyone knows that involves more work for you.
I'm pretty sure I've seen articles on HN where VPS companies (maybe DO?) have kicked people off their infrastructure with zero notice. So, not at all different from being locked out of Apple/Google/Amazon.
How so? The VPS can shut you down as well. You might say the migration path is easier, but there will be a weak link somewhere. Even if you put a datacenter in the basement, you need to connect to the internet somehow, and that can be taken away.
well DO decided to lock me out of my account that I had for years because they decided that I'm a fraud and had to deal with their terrible customer service
With rclone you can encrypt data locally while uploading. This allows you to host everything from home and use the cloud only for backups, basically end-to-end encrypted.
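A sketch of what that looks like in rclone's config (remote and bucket names are placeholders; the crypt remote wraps the cloud remote so files are encrypted before they leave the machine):

```
# ~/.config/rclone/rclone.conf
[cloud]
type = b2
account = <key-id>
key = <application-key>

[cloud-crypt]
type = crypt
remote = cloud:my-backups
password = <password obscured by `rclone config`>
```

Then something like `rclone sync /srv/data cloud-crypt:` uploads only ciphertext.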
I always think of it like this: how many examples of "I got locked out of all my data!" would there be if billions of people started following the author's advice? Definitely more than the ~5 they list (whether that is user error or actually Apple/Google/Amazon's fault).
the 'duck it out' thing really made me cringe. we really need to get away from that idea of having a searching verb that is tied to the popular search engines of the day. i use duckduckgo but it might not be around in 10 or 20 years, or there might be something better by then, so it's pointless to expect everyone to keep learning new verbs all the time.
> 1) Duck out isn't a thing, just stop it.
Well, 1) sorry but you don't get to decide this, 2) how would anything ever become a thing if people were not allowed to invent new things?
- I'm not favoring the term, just opposing your commanding tone
I do get to decide; as a member of popular culture, I get a say. (So do you!) And I say a resounding "no!" to "duck it out."
Hi, I'm the author,
Thank you all so much for your comments. I didn't expect this to get this high on HN. I'm aware there are simpler solutions for self-hosting, even partially. I'm also aware that my setup is not perfect - that's why this post was created. I was hoping to get some feedback. Not from this many of you, but from some friends. :) Ask me anything you like, I'll try to answer every question.
I really enjoyed the read, thank you!
Your system architecture is very clean and understandable. I spend a lot of time marveling at the beautiful but often overly complex diagrams on r/homelab, which more often than not dissuade me from actually having a go at it. Your explanation made it feel very approachable.
That being said... > Some people think I'm weird because I'm using a personal CRM.
This strikes me as incredibly...German, hahaha! Is there any reason your Contacts solution doesn't/can't provide this functionality?
Heh, I'm living and working in Germany, but I'm not German, still (or yet). :)
Regarding CRM and Contacts - I could possibly fit all the info in the 'about' field for a particular contact, but Monica offers me so much more. With Monica, I can structure the data for a contact in a better way. That 'better way' and the feature set of Monica is why I'm using it.
I mean, I'm sold. I guess the biggest question is could your Contacts be pulled from Monica so that things like messages and phone apps pull that info?
The article sounds like you enjoyed building the system you put together, and I think that's probably a seriously undervalued aspect of why someone might take on this kind of work.
Thanks. It is kind of a show-off of what I built for myself. That's why I put that little disclaimer into the post, that it's not for everyone. I do have strong opinions about a lot of the things regarding where I hold my data, but I don't want to strong-arm anyone into doing the same thing.
K-9 is not ugly if you use a more recent release. In F-Droid, go to the app page and have a look at the Versions that are available.
>Ask me anything you like, I'll try to answer every question.
What's stopping you from hosting at home?
While I admit that I often feel claustrophobic with only ~35-40 Mbps of usable bandwidth, my power costs for several orders of magnitude more usable storage+CPU are in line with what you're paying for a VPS right now.
>I was hoping to get some feedback.
Do you run any additional layers of security on top of Nextcloud? Something simple like requiring SNI to ward off casual scanning activity, or more advanced like a WAF layer?
I ask because I've been hesitant to trust my whole digital life to something that doesn't have a full-time paid security staff.
"for purely private use, I wouldn't opt for AWS even if I had to choose now. I'll leave it at that"
I will elaborate: I started out with AWS several years ago. I could never work out how they calculated my bill, and had more than one >$100 shocks for hosting my personal services.
I moved to DO and Vultr (stayed with DO for no real reason) and so shut everything down on AWS.
But I still got a $0.50 monthly charge on my credit card. I tried emailing - no response, totally ghosted.
I went through the control panel several times - it is/was a huge mess, obscure by policy obviously - and finally in some far distant corner found something still turned on. I did not understand what it was at the time and can recall no details, but I turned it off with great relief.
A week later I got an email from AWS (!) saying that I had made an error and they had helpfully turned the whatever-it-was back on...
So I continued to donate $0.50 a month to Amazon until I cancelled the credit card for other reasons. (it would cost $10 for the bank to even think about blocking them)
These days I will crawl over cut glass not to do business with that organised bunch of thieves called Amazon.
This inspired me to finally track down the $0.XX monthly donation I've been making to AWS. Through the billing dashboard [1] I discovered a zombie static site I set up ages ago with S3 and Route 53.
[1]: https://console.aws.amazon.com/billing/home#/bills
(Edit: I found the S3 bucket, but mysteriously no hosted zone to account for the Route 53 bills ¯\_(ツ)_/¯)
> I went through the control panel several times - it is/was a huge mess, obscure by policy obviously - and finally in some far distant corner found something still turned on. I did not understand what it was at the time and can recall no details, but I turned it off with great relief.
Using IaC (Terraform) would solve this in an instant: "terraform destroy". Done.
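For illustration, assuming the personal services had been defined as Terraform resources in the first place (the directory name here is hypothetical), the cleanup workflow is a sketch like:

```shell
cd ~/infra/personal-site
terraform state list   # every billable resource Terraform is tracking
terraform destroy      # plans the deletion, prompts for confirmation, then removes them all
```

The point is that state tracking makes the "far distant corner with something still turned on" problem impossible: if Terraform created it, Terraform knows about it.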
Oh god, I'm not the only one. I tried to host my own personal projects on AWS but could not for the life of me ever get the resources to be turned off.
I am in the same boat. I'm not personally using AWS anymore, but I'm still charged $x.1x a month. It's not worth it enough to track the charge down, and I might just delete my account, without forgetting to change my email address beforehand (since you can't reuse a deleted account's email).
Y'know what, although I'm currently self-hosting my email, my websites, my storage, my SQL, my Active Directory, etc., I'm also in the process of migrating the whole lot to Azure and/or independent hosting.
Why? It's just too much hassle these days; I no longer want my downtime dictated by my infrastructure. I don't want to spend off-work hours making sure my boxes are patched, my disks are RAIDed, my offsite backups are scheduled, and my web/email services are running. I just want it all to work, and when it doesn't, I want to be able to complain to someone else and make it their problem to fix.
For my data, I'll probably still have an on-site backup, but everything else can just live in the cloud, and I'll start sleeping better, due to less stress about keeping it all secure and running.
I stopped self-hosting as soon as I moved out of university. Back in university I had a gigabit uplink and only 1 power outage in 7 years of my PhD. Now in the middle of Silicon Valley I have only 15-20 mbps and have had 3 power outages in 1 year.
Did you ever receive complaints that your emails are ending up in spam folder for Gmail/Outlook/<other big email provider>?
How about you receiving a lot of spam emails?
Nope. I'm on a static business IP, with DNS all set up correctly. I've also got SPF records set up, but I don't think they get used, as I use my ISPs smarthost for relaying mail through.
I do get a lot of incoming spam though, but I think that's more to do with some of my email addresses being over 20 years old.
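For reference, an SPF setup like the one described boils down to a single TXT record in the zone; a hedged sketch (the domain, IP, and smarthost name are placeholders, not the commenter's actual records):

```
; Only this host and the ISP's smarthost may send mail for example.com;
; "-all" tells receivers to reject everything else.
example.com.  IN TXT  "v=spf1 ip4:203.0.113.10 include:smarthost.isp.example -all"
```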
Not OP, but I've been hosting my own mail as well (Postfix, Dovecot, SpamAssassin) for six or seven years now. Had one issue with outgoing mail to Microsoft (Hotmail, I think) bouncing. The IP of my dedicated server had been blacklisted from before I used it, but I got them to remove it. No other issues I can think of.
I'm getting about 1-2 spam mails a month delivered to my inbox, usually French SEO spam. Not worth investigating.
Because apparently you don't need to keep stuff running in Azure patched, I guess?
Microsoft take care of patching Windows VMs, and Exchange is a service, so not your own boxes.
The author of this post cites $55/month as his cost. This is wrong. If it takes him, say, two hours a month to maintain (probably conservative) then if you value those hours at $100/hour the actual cost is $255/month.
The reality is probably in excess of $1000/month. This only makes sense for people who have an abundance of spare time, and that's pretty rare these days.
Free software for DIY hosting like this is "free as in piano." Like a huge piano sitting on the street with a sign that says "free piano," it is actually not free at all when you factor in the hidden costs.
Well that is only if you look at software/ops as a 100% commercial undertaking. It is not.
One way to understand why people self-host is to understand why people self-cook their food. It takes significantly longer to prepare food (get raw material, cut, cook) than ordering it. People still do it for $reasons - some find it fun, some find it cheaper, some find it nice to be able to control the taste, some find it more healthy to know whats going on their plate, and so on.
Only concentrating on the dollar cost is too narrow a view, IMO.
> Well that is only if you look at software/ops as a 100% commercial undertaking. It is not.
Your time is only free if it is worth nothing. My time is very valuable. I happily pay other people and companies to do things for me because I'd rather have the time.
I think it's just a normal part of life. When you're young, you have more time than money. When you're old, you have more money than time.
Far fewer people cook their own food for fun than because preparation time isn't the only constraint: cost, availability, health, transparency (of prep and ingredients), dependency, etc.
It's common for people to delude themselves into thinking they haven't wasted their time by convincing themselves they did it for fun (or the lols, or whatever). I'd say the difference is whether they knew (or stated) this upfront, or only after they failed or had a better solution pointed out to them.
The second most common: at least I learnt something / gained XP. Which is fair enough, if true.
> Only concentrating on the dollar cost is too narrow a view
Not if you convey other resources/constraints in dollars. Just attach a dollar-value to your free time, perhaps with discounts for things with side-benefits.
As a developer, all of the time I spend working on hobby projects (and self-hosting has turned into a hobby) keeps me up to date. It's how I learned Kubernetes, and it's how I learned Traefik, nginx, and Apache before that. It's how I learned how the different packaging and distribution ecosystems work for many different languages and frameworks. I intentionally host and back up some things on AWS, GCloud, and Azure. Other things live on Intel NUCs. I administer a G Suite for the family. The list goes on and on. It gives you the chance to experiment with new tools and toys that you're unlikely to use at your current job.
My long-winded point is that all of the things I've picked up have been invaluable to me at work, especially in my time as a contractor where I would be switching between many different stacks. If you want to find a "true" cost for self-hosting, you need to also treat it as training.
I don't really believe it's any different from say, a woodworker that has a shop at home. They may spend the workday just doing framing, but odds are good they find the time to make a chair, a bird house, something to keep their skills sharp.
> As a developer, all of the time I spend working on hobby projects (and self-hosting has turned into a hobby) keep me up to date.
True for some things, like things that are not at all related to your work. But your job should be actively trying to make you better at your job, and a better person.
Large companies like the one I work for hire outside firms to offer classes to the employees for free, and on company time. If there is a new version of a piece of software that is significantly different from an old one, my company pays for the users to go to training, or to train online. This is very common for products like Office or the Adobe suite. But for some reason, as developers, we too often think that we're supposed to better ourselves on our own dime. If it benefits your current employer, the current employer should chip in.
I used to think this too, which is why I was self-hosting (I'm the OP of this thread), but as I've got older, and my interests have shifted, along with no longer needing to be at the bleeding edge of my skill-set (I leave that stuff to the younglings these days), I found that managing my own infrastructure felt more like a chore than a hobby, more so if it's a 'production' system and not a 'lab' environment.
"Free as in free puppy" is my other favorite metaphor. Free software is a gift to the world, but IMO it's important not to undervalue the time and expertise of operationalizing it.
It's worth remembering that you can get an expensive puppy too. I.e. choosing proprietary software doesn't mean that time and expertise won't be required.
Recent previous discussion at: https://news.ycombinator.com/item?id=26672009 .
Free software was always intended as Libre software or Freedom software, though.
The main concern is autonomy, not economic costs.
I expect you know this already, which is why the puppy analogy sort of fails.
I would argue yes and no here. If those are two hours during which you are not employed making $100, then it's $55. If you have to give up two hours of employed time to maintain this, then yes, $255.
I love my free time and there is precious little. But I don't think of it as costing ME $100/hr when I wash, dry, and detail my car, especially as I like doing it.
IMO you can get 90% of the utility here (owning your data) with just the NAS and rsync.
1. Don't feed the FAANG
2. Store your SoR media, notes, documents on your own NAS
3. Automate a backup of the NAS, preferably both on and off site (I use rsync from a pi + large disk + cloud blob storage)
I second this: either get a Synology/QNAP NAS, or take an old PC with a couple of drives and install OpenMediaVault/FreeNAS/Unraid. All of these platforms have out-of-the-box solutions that mirror most cloud services. I found the homelab subreddit to be great.
If you get an off-the-shelf NAS, get one with at least 2 GB of RAM! Synology is particularly notorious for selling NASes with 512 MB (WTF?!) of RAM, and then when you try to run a few applications it grinds to a halt.
NAS fails for smartphone integration. Photos should auto upload. Calendar, todos, and contacts need to show up in the usual apps. It needs to be available from remote.
Synology NASes have various apps for syncing mobile devices, such as DS Photo for uploading photos to Photo Station, Synology Drive for more of a Dropbox approach, and MailPlus for contacts, emails, etc.
https://www.synology.com/en-nz/dsm/feature/photo_station
https://www.synology.com/en-nz/dsm/feature/drive
https://www.synology.com/en-nz/dsm/feature/mailplus
Syncthing can be used to remotely sync the relevant directories. It's multiplatform and has an Android app too (still no iOS, though).
FolderSync is another excellent option that might mesh better with your existing setup.
Self-hosted VPN (if you're physically away from home) + directory sync app (in my case, SyncMe [at the moment; might switch to SyncThing]).
Smartphone integration isn't necessary for everyone (myself included), but I appreciate most people want it.
Re: 3, Restic is pretty good, as your data is encrypted locally, so it can be used over untrusted storage facilities.
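A hedged sketch of that restic workflow (the repository location, host, and passphrase are placeholders; restic encrypts everything client-side before it touches the remote):

```shell
export RESTIC_PASSWORD='a-long-passphrase'   # encryption key stays on this machine

restic -r sftp:backup@remote-host:/srv/restic-repo init      # one-time repository setup
restic -r sftp:backup@remote-host:/srv/restic-repo backup ~/documents
restic -r sftp:backup@remote-host:/srv/restic-repo snapshots # list stored snapshots
```

The storage provider only ever holds encrypted, deduplicated chunks, which is why an untrusted backend is acceptable.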
Synology backs up beautifully to the cloud.
The author treats his personal life as a job, with productivity tools and benchmarks. Whatever works for you, but I couldn't live like that.
For some of us, it becomes a hobby. The only difference is that the technical knowledge and experience gained at work can also be applied at home (without a lot of restrictions).
What I meant is that I need some time to _not_ be productive. Like, actively not being productive. Literally wasting time for the sake of getting some peace of mind and true relaxation.
If your personal life is filled with productivity tools and optimizations, when in your daily life are you _not_ worried about productivity? If that time is zero, I think it's kind of sad and maybe even unhealthy. It's just my opinion, of course. :)
I agree with you on the importance of non-productive time but I've found having my own infrastructure makes my life smoother day to day in exchange for some upfront cost. It's a tricky balance, and as many other commenters have mentioned that initial cost can end up not being so initial - though I think most people who engage in this 'hobby' generally find both the process and the product rewarding.