While I treat my home lab OpenStack instances as proper cloud instances, meaning they are meant to be spun up and torn down rather than exist for a long time, I don’t like assigning more than one floating IP per tenant network, as most of the instances should never be accessible from my main network.
Basically, on each tenant network I give one instance a floating IP address and use that as a gateway to all the others. With only a few instances on the internal network that is fine, but it gets a little irritating when I have quite a few instances I need to bounce around.
The main problem is that the private key needed to access each instance has to be copied onto that gateway server and used to access all the others. Should I need to ssh from one of the internal instances to another, that key also has to be copied onto the server(s) I wish to ssh from.
Ok, that is not a problem, it is by design in cloud images, but it gets frustrating: not just having to copy the key to the gateway server as step one, but having to copy it to any instance I do a lot of ssh’ing from.
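To make that concrete, the hop-through workflow looks something like this (the gateway floating IP is the one used later in this post; the ‘fedora’ user and the internal address 10.0.3.5 are assumptions for illustration):

# copy the private key to the gateway instance first
scp -i ./marks_keypair.pem ./marks_keypair.pem fedora@192.168.1.235:~/
# logon to the gateway, tighten the key permissions, then hop again
ssh -i ./marks_keypair.pem fedora@192.168.1.235
chmod 600 marks_keypair.pem    # on the gateway; ssh refuses loose permissions
ssh -i marks_keypair.pem fedora@10.0.3.5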
I did add a network route for a tenant subnet pointing at the gateway floating IP address, but network traffic to the internal network only gets as far as the internal address of the gateway server; it probably just needs something like IPv4 forwarding enabled in the config, but I want to minimise changes to images.
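For the record, a sketch of that routed approach: the desktop route is what I actually added, while the gateway change is my guess at what is missing, and exactly the kind of image change I want to avoid (return-path NAT might also be needed).

# on my desktop: route the tenant subnet via the gateway floating ip
sudo ip route add 10.0.3.0/24 via 192.168.1.235
# on the gateway instance: probably what would be needed to forward traffic
sudo sysctl -w net.ipv4.ip_forward=1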
While any internal instance can easily be accessed from the OpenStack network server simply by ssh’ing from the correct network namespace on the network node for the tenant network, that ability is supposed to be hidden from customers. So we will not mention that anybody with access to the network node can log in to instances on your private network (if they have the correct ssh key, or if you stupidly allow password logons).
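For completeness, a sketch of that hidden access path; the router namespace name below is a placeholder, so list the namespaces on your network node to find the real one, and the user and internal address are again assumptions.

# on the network node: find the router namespace for the tenant network
sudo ip netns list
# ssh into an internal instance from inside that namespace
sudo ip netns exec qrouter-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ssh -i marks_keypair.pem fedora@10.0.3.5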
The ideal solution would be to just proxy all traffic from any of my desktops via the gateway instance to any of the internal network servers. Hard though it may be to believe, the hardest part of getting that working was finding a socks5 client that would work.
My proxy server solution
My proxy solution is not perfect, and cannot be automatically configured, because like using the gateway instance as a jumphost to the other internal servers it also requires the private key (the key assigned to the gateway instance only; you can use other keys on other instances in the internal network if desired) to be copied onto the gateway server. In this case, however, it is to allow ssh to localhost, as ssh logins are still only permitted by the default configuration using keys. Yes, you could reconfigure sshd to allow login without keys, but either way manual configuration would be required on the gateway instance, and allowing logins without keys makes it less secure.
Anyway, ssh can act as a proxy server (source document: http://www.catonmat.net/blog/linux-socks5-proxy/). Once the private key is copied to the gateway instance, ssh can be used to start a proxy server with the following command (use your own key of course)
ssh -i ./marks_keypair.pem -N -D 0.0.0.0:1080 localhost
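If you would rather not keep a terminal tied up, ssh’s -f flag should background the proxy once it has authenticated; a minor variant of the same command:

ssh -i ./marks_keypair.pem -f -N -D 0.0.0.0:1080 localhost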
The key point about this solution is that no additional software needs to be installed on your ‘cloud base’ instance. You can also, if you wish, use iptables rules to limit what external addresses can connect to the proxy port, so it is no less secure than commercial packages. Plus ssh is common across all *nix distributions, so this should work with any flavour of Linux I choose to use on a gateway instance.
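As a sketch of that lockdown (the desktop address 192.168.1.10 is an assumption; substitute your own):

# on the gateway instance: only my desktop may reach the socks port
sudo iptables -A INPUT -p tcp --dport 1080 -s 192.168.1.10 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 1080 -j DROP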
A correctly configured proxy client can then access all internal tenant network instances. This means I only need to copy the one private key to the gateway server, and with ssh running as a proxy the internal instances can all be accessed. The key used on the gateway server can be different from the user keys added to the instances at launch time; user private keys can live on their own workstations, as I don’t need to copy them anywhere; and from any workstation with a valid key I can access all the internal instances without having to log on to the gateway server… just as cloud instances should be accessed :-).
My proxy client solution
As noted above, the hardest part of getting a proxied solution going was finding a working proxy client. tsocks is now in the Fedora repository (I am using F25 and the version available is tsocks-1.8-0.16.beta5.fc24.x86_64). As it is in the repository, it was immediately my preferred solution.
The biggest issue is that the BETA tag definitely covers the man pages as well. A “man tsocks.conf” undoubtedly shows you what will be supported in a configuration file one day, but attempting to build a configuration file using the man page will get you nowhere.
I found a working configuration via Google that got everything going for me. My ‘gateway’ server floating IP address is 192.168.1.235 and the tenant network range the floating address is attached to is 10.0.3.0/24. The configuration below in /etc/tsocks.conf allows me to ssh directly into any of the tenant private network machines; client problem solved.
# default server
local = 192.168.1.0/255.255.255.0
server = 192.168.1.235
server_type = 5
server_port = 1080

# explicit ranges accessed via specific proxy servers
path {
    server = 192.168.1.235
    server_port = 1080
    server_type = 5
    reaches = 10.0.3.0/255.255.255.0
}
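With that configuration in place, usage is simply prefixing the command with tsocks (the internal address 10.0.3.5 is an example host on the tenant subnet):

tsocks ssh -i ./marks_keypair.pem fedora@10.0.3.5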
My reasoning for this solution
It is fair to say I was looking at full-function dedicated proxy servers for a while, in particular dante and srelay. However, all the products advertising themselves as proxy servers are either commercial or provided as discrete installs… not in distribution repositories. Anything not in a distribution repository requires manually watching for security updates and performing manual updates when there are any.
I am playing with too many things at any given time to even consider all that extra work, so I chose a solution where a simple dnf/yum update will keep me up to date. Well, the images are kept up to date; running instances are supposed to be considered temporary in a cloud environment, so they can look after themselves until torn down.
I will use repository-provided packages wherever possible, and this solution achieves that goal. Also, if a utility is not going to be supported by a repository package, that utility has no future (unless they have taken it commercial-only, in which case 90% of Linux users will never use it again).