Any guides on how to host at home? I’m always afraid that opening ports in my home router means taking the heavy risk of being hacked. Does using something like CloudFlare help? I am a complete beginner.
Edit: Thanks for all the great responses! They are very helpful.
From what I understand, opening a port isn’t a risk in and of itself — it’s only a risk if the software using the port is insecure! So long as you use reliable software and take care to configure things properly (following through with instructions from a site like ArchWiki or the official documentation helps), you’re good.
Cloudflare is more for DDoS protection, which you almost certainly don’t need. You could always set up DDoS protection later on, if the need ever arises.
I think the safest option is to not host from your home network. If you aren’t up to date on security patches, you could potentially expose a lot of data from an insecure server running inside your network.
There are precautions you can take, like isolating any external-facing servers from the rest of your network, but I generally recommend using a hosted service instead.
There are ways to host things from home without opening ports in your router at all. This usually involves running something that calls/tunnels out of your network to some service that accepts incoming connections and sends them “backward” over that connection. Cloudflare offers something called Tunnel, ngrok does something similar (though mainly aimed at development rather than production hosting), and you can even host something yourself using something like frp (which is what I use, even for the Lemmy instance I am writing this from).
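To give a feel for the outbound-tunnel idea with frp, here is a minimal sketch of the two sides. The port numbers, hostname, token, and service name are all made up for illustration, not a recommended setup:

```ini
# frps.ini — runs on the public server (e.g. a cheap VPS)
[common]
bind_port = 7000      ; port the home frpc dials out to
token = change-me     ; shared secret (illustrative)

# frpc.ini — runs at home; makes the outbound connection,
# so no ports need to be opened on the home router
[common]
server_addr = vps.example.com
server_port = 7000
token = change-me

[web]                 ; hypothetical service name
type = tcp
local_ip = 127.0.0.1
local_port = 8080     ; service running at home
remote_port = 80      ; port exposed on the public server
```

Traffic hitting port 80 on the VPS then gets carried back over the connection the home machine initiated.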
I haven’t looked too closely at it, but there is an awesome-tunneling page someone put together that goes over these options and more.
Let me know if you want more detail on these options or specifics of how I’ve set up frp.
Not OP and I don’t have any specific questions right now, but if you have any general advice on frp you want to share, I’d be happy to read it :-)
Very basically, I set up a `frps` on the public server, and one `frpc` per service I want to have publicly accessible, where the `frpc.ini` defines all the specifics, so I don’t have to touch the server regularly. Is that correct so far?

Edit: After typing it out, I came up with a question: what happens when the `frpc`s go offline? Will the server return an error, fall back, or just deny the connection?

Yeah, you’re basically on the right track. I do a couple things in a possibly interesting way that you may find useful:
- I run multiple `frps`s on different servers. Haven’t gotten around to setting up a LB in front or automatically removing them from DNS, but doing that sort of thing is the eventual plan. This means running as many `frpc`s as I have `frps`s. I also haven’t gotten to the point of figuring out what to do if e.g. one service exposed via `frps` is healthy but another is not. It may make sense to run HAProxy in front of it or something… sounds terrible…
- I have multiple `frpc.ini`s; they define all of the connection details for a particular `frps`, then use `includes = /conf.d/*.ini` to load up whatever services that `frpc` exposes.
- I run `frpc` in docker and use volumes to manage e.g. putting the right `frpc.ini` and `/conf.d/<service>.ini` files in there.
- I use QUIC for the communication layer between `frpc` and `frps`, using certificates for client authentication.
- I run my `frpc`s (one container per `frps`; I’m considering ways to combine them to make it less annoying to deploy) right alongside the service I am exposing remotely, so I run e.g. one for Traefik, one for ~~gogs~~ ~~gitea~~ forgejo ssh, etc. If you are using docker-compose I would put one (set of) `frpc` in that compose file to expose whatever services it has. Similar thought for k8s: I would do sidecar containers as part of your podspec.
- If I have more than one instance of a service, such as running multiple Traefik “ingress” stacks, I run a set of `frpc`s per deployment of that service.
- Where possible I use proxy protocol via `proxy_protocol_version = v2` to easily preserve the incoming IP address. Traefik supports this natively, which matters most to me as most of what I run connects over HTTP(S).
- I choose to terminate end-user SSL using Traefik within my homelab, so the full TLS session gets sent as a plain TCP stream. There is support for HTTPS within frp using `plugin = https2http`, but I like my setup better.
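If you want to try the docker-compose sidecar approach from the list above, a minimal sketch could look like this. The image names, paths, and service names are assumptions for illustration, not my exact setup:

```yaml
# docker-compose.yml — run an frpc sidecar next to the service it exposes
services:
  myapp:                        # hypothetical service being exposed
    image: nginx:alpine
  frpc:
    image: snowdreamtech/frpc   # one commonly used community frpc image
    volumes:
      - ./frpc.ini:/etc/frp/frpc.ini:ro
      - ./conf.d:/conf.d:ro     # per-service .ini files pulled in via includes
    depends_on:
      - myapp
```

Because both containers share the compose project’s network, the `/conf.d/myapp.ini` fragment can point `local_ip` at the `myapp` service name directly.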
As to your question of “what happens when the `frpc`s go offline?”, it depends on the service type. I only use services of `type = tcp` and `type = udp`, so can’t speak to anything beyond that with experience.

In the case of `type = tcp`, you can run multiple `frpc`s and the `frps` will load-balance to them, meaning if you run multiple you should get some level of HA: if one connection breaks it should just use the other, killing any still-open connections to the failed `frpc`. Same thought there as how e.g. `cloudflared` using their Tunnels feature makes two connections to two of their datacenters. If there is nothing to handle a particular TCP service on an `frps`, I think the connection gets refused; it may even stop listening on the port, but I’m not sure of that.

Sadly, in the case of `type = udp` the `frps` will only accept one `frpc` connection. I still run multiple `frpc`s, but those particular connections just fail and keep retrying until the “active” `frpc` for that udp service dies. I believe this means that if there is nothing to handle a particular UDP service on an `frps`, it just drops the packets, since there isn’t really a “connection” to kill/refuse/reset; the same thing about stopping listening may apply here as well, but I am also unsure in this case.

My wishlist for frp is, in no particular order:
- `frpc` making multiple connections to a server
- `frpc` being able to connect to multiple servers
- Some sort of native ALPN handling in `frps`, and the ability to use a custom ALPN protocol for frp traffic (so I can run client traffic and frp traffic on the same port)
- `frps` support for load-balancing to multiple UDP `frpc`s via some sort of session tracking, triple-based routing, or something else
- `frps` support for clustering or something, so even if one `frps` didn’t have a way to route a service it could talk to another `frps` nearby that did
  - or even support for tiering the idea of “locality”: first it tries on the local machine, then in the same zone/region/etc.
- It’d be super neat if there were a way to do something like Cloudflare’s “Keyless SSL”
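To make the TCP load-balancing I described earlier concrete: as far as I know, frp does this through its proxy “group” feature, where multiple `frpc`s register the same proxy under a shared group name and key. A sketch, with the names and key being illustrative:

```ini
; In each frpc's per-service config, e.g. /conf.d/web.ini,
; identical across every frpc that should serve this proxy
[web]
type = tcp
local_ip = 127.0.0.1
local_port = 8080
remote_port = 80
group = web            ; frpcs in the same group share the remote port
group_key = change-me  ; must match across all group members
```

With this in place, the `frps` spreads incoming connections across whichever group members are currently connected, which is where the HA behavior comes from.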
Overall I am pretty happy with frp, but it seems like it is trying to solve too much (e.g. “secret” services, p2p, HTTP, TCP multiplexing). I would love to see something focused purely on TCP/UDP (and maybe TLS/QUIC) edge ingress emerge and solve a narrower problem space even better. Maybe there is some sort of complex network-level solution with a VPN and routing daemons (BGP?) and firewall/NAT stuff that could do this better, but I really just want a tiny executable and/or container I can run on both ends and have things “just work”.
<pipedream> I want the technology to build something similar to Cloudflare incredibly simply, and for more providers to be able to offer something similar. Maybe there is something that could be built to solve this if they open source Oxy. </pipedream>
Wow, what a rich response. I’ll be able to learn from that for days! Thank you very much!
Especially love the hints about integrating with docker. I often struggle to understand why people like to use it, but this bit about integrating frp really gave me a new perspective.