This page describes how to manage the infra. See [https://vtluug.github.io/rtfm.txt rtfm.txt] for a guide to build it from scratch.
This covers setup of a VM on [[Infrastructure:Meltdown|meltdown]] or [[Infrastructure:Spectre|spectre]], depending on whether the service is critical.

== Overview ==
LUUG infrastructure runs on, essentially, four key components:
* Hosts
* NFS
* Auth
* Out-of-band Ansible & Docker manifests

Almost all of our services are hosted in Docker containers across various hosts: the LLM server on [[Infrastructure:Gibson]], the web content on [[Infrastructure:Sczi]]. Every one of these Docker containers has its configuration detailed [https://github.com/vtluug/docker-manifests here]. The entire repository is cloned to /nfs/cistern/docker/apps, and the docker-compose.yml for each service is run with <code>docker compose up -d</code> from inside that service's folder.

Note the path: /nfs/cistern/docker/apps. Looking at the docker-compose folders and configs, you will notice that the ''data'' for a container is '''never''' stored alongside the compose files themselves. Instead, it is stored at /nfs/cistern/docker/data/<insert-service-name>/<etc>. This path is an NFS (Network File System) mount: it lives physically on our NFS server, [[Infrastructure:Dirtycow]], and is mounted over the local network. The implication should be clear: ''the host install does not actually matter''. If the operating system for e.g. [[Infrastructure:Sczi]] blew up, all you would need to do to bring everything back up is re-create it, install Docker, mount the cistern NFS directory (with the data files still intact), set up auth, and start all the containers again. No data is ever lost, because nothing is stored on the host itself: it's all on the NFS share.

How do you easily set all that back up again? [https://github.com/vtluug/ansible Ansible]. You can think of Ansible as a language for defining deployed servers. It uses YAML (.yml), and "roles" are specified for each server.
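As a sketch of that data-path convention (the service name and mount points here are hypothetical, not taken from the real manifests), a compose file under /nfs/cistern/docker/apps would bind-mount its state out of /nfs/cistern/docker/data:

```yaml
# /nfs/cistern/docker/apps/examplewiki/docker-compose.yml
# "examplewiki" is a made-up service name for illustration.
services:
  examplewiki:
    image: mediawiki:latest
    restart: unless-stopped
    volumes:
      # Persistent state lives on the NFS share, never next to this file,
      # so the host OS can be rebuilt without losing anything.
      - /nfs/cistern/docker/data/examplewiki/images:/var/www/html/images
```

Started with <code>docker compose up -d</code> from the service folder; since the volume path is on cistern, wiping the host leaves the data untouched.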
In roles/<server role>/tasks there is a list of tasks needed to set up the server, and /hosts.cfg lists the servers and which roles each one has. All you need to do to set a server up is run Ansible; it will take care of the rest. You can run it twice, or a million times, to no ill effect: it's designed to be idempotent.

Knowing this much, you can re-create [[Infrastructure:Sczi]] and [[Infrastructure:Gibson]], but a few things remain: the VM hosts ([[Infrastructure:Meltdown|meltdown]] and [[Infrastructure:Spectre|spectre]]) and the router ([[Infrastructure:Shellshock]]). Deploying the router is described in [https://vtluug.org/rtfm.txt rtfm.txt], but VM deployment is entirely automated via Ansible, which is ''sick as fuck''. It only works for Ubuntu Server and Red Hat Enterprise (Alma, Rocky, CentOS) distros, but for those it works brilliantly: add a VM to [https://github.com/vtluug/ansible/blob/master/roles/deploy-vms/defaults/main.yml this file], run the ansible playbook, and the new VM will be created automagically.

== Web traffic ==
We run DNS through Gandi. Ask an officer to add you to the VTLUUG org on that website ([[User:Rsk]] has access, if you're reading this in the far future and need it). Each host gets a direct A record pointing at its IP address, and web content ''all'' points to [[Infrastructure:Sczi]] via CNAME records. Sczi's docker config has an nginx container that handles certificates and reverse proxying.

Acidburn is our singular "traditionally managed" server. It runs many services, mail among them, and all of them run as services on the VM itself, not a container in sight (sans the IRC <-> Matrix bridge, which is there for IP whitelisting reasons). You can redeploy it from ansible, but it won't have the same soul. Try not to break it.

== Auth ==
We run two authentication servers, [[Infrastructure:Chimera]] and [[Infrastructure:Sphinx]]. They're both on the same FreeIPA network and can be deployed via ansible.
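The inventory-plus-roles layout described above can be sketched roughly as follows. The file names hosts.cfg and roles/<role>/tasks come from this page; the group name, hostnames, and task contents below are hypothetical, assumed for illustration only:

```yaml
# hosts.cfg (Ansible inventory; group/host names illustrative):
#   [docker-hosts]
#   sczi.example.org
#   gibson.example.org

# roles/docker-host/tasks/main.yml -- a hypothetical role task list.
# Running this twice is harmless: each module checks state first (idempotent).
- name: Install Docker
  ansible.builtin.package:
    name: docker
    state: present

- name: Mount the cistern NFS share
  ansible.posix.mount:
    src: dirtycow:/cistern    # export path assumed, not documented here
    path: /nfs/cistern
    fstype: nfs
    state: mounted
```

The payoff of this structure is that "which machine does what" lives in one inventory file, while "how to build it" lives in reusable roles.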
FreeIPA is a full-stack authentication provider. Part of our ansible playbook for LUUG hosts runs ipa-client-install, which sets up each host as a "client" of this FreeIPA network and allows users with FreeIPA accounts to log in via ssh, with user groups mirrored onto the system. [[Infrastructure:Spectre]], notably, is ''not'' a FreeIPA client, because it's intended for use by non-LUUG entities (whether personal member VMs or VMs loaned out to other student orgs). The root account password is in the [https://git.vtluug.org/officers/vtluug-admin vtluug-admin] private repository; ask someone to add you to the officers group.
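A minimal sketch of how such an enrollment step could look inside a role (the flags and guard file here are assumptions for illustration; the real playbook's task may differ):

```yaml
# Hypothetical Ansible task enrolling a host as a FreeIPA client.
- name: Enroll host in the FreeIPA network
  ansible.builtin.command:
    cmd: ipa-client-install --unattended --mkhomedir
    # ipa-client-install writes this file on success, so the task
    # is skipped on re-runs, keeping the playbook idempotent.
    creates: /etc/ipa/default.conf
```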
== Networks ==
We ''should'' have the following networks in place:
* [[Infrastructure:Meltdown|meltdown]] and [[Infrastructure:Spectre|spectre]] br0 on eno1 <--> enp4s0 on [[Infrastructure:Joey|joey]]. This is the main LUUG network.
** 10.98.0.0/16 for VTLUUG NAT
** IPv6 via prefix delegation on 2607:b400:6:cc80::/64
== Prerequisites ==
* Clone <code>https://github.com/vtluug/scripts</code>. This is referred to as 'SCRIPTS' in this guide.
* Clone <code>https://github.com/vtluug/ansible</code> and install ansible. This repo is referred to as 'ANSIBLE' in this guide.
* Have access to the [https://git.vtluug.org/officers/vtluug-admin officers/vtluug-admin] repo on [https://git.vtluug.org gitea].
* Understand the [[Infrastructure:Network|Network]] and [[Infrastructure]].
* Put your SSH key on [[Infrastructure:Meltdown|meltdown]]
[[Category:Infrastructure]]
[[Category:Howtos]]
[[Category:Needs restoration]]