- At this point we should think about what is needed to meet our target requirements.
- We’re going to put off a number of considerations that depend on robust use of ‘userdata’, since we will be exploring the use of Jinja to template userdata later in the series.
- At this point we’re still concentrating on bringing up instances (and deleting old instances when needed).
- We’re also not building a ‘Zero Downtime’ project, where changes are expected to cause no visible (even brief) disruption in service; that would be overkill for our requirements.
What We Need for Our Infrastructure (That We Don’t Have Yet)
- Use of OpenStack Security Groups
- Handling creation of instances on both public and private networks
- Handling userdata for instances that are only on a private network (requires the use of a ‘config drive’)
- Attaching existing volumes
- Userdata (as mentioned, that’s for later in the series)
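To make the first three items concrete, here is a rough sketch of how they might come together when building the arguments for the OpenStack SDK’s `conn.compute.create_server()` call. This is illustrative only, not code from the repo; all names (network, security group, etc.) are placeholders, and the helper is kept pure so the API call itself stays separate:

```python
import base64


def build_server_args(name, image_id, flavor_id, network_id,
                      security_group, userdata, private_only=False):
    """Assemble keyword arguments for openstacksdk's
    conn.compute.create_server(). All identifiers here are
    placeholders for illustration."""
    args = {
        "name": name,
        "image_id": image_id,
        "flavor_id": flavor_id,
        "networks": [{"uuid": network_id}],
        "security_groups": [{"name": security_group}],
        # Nova expects userdata to be base64-encoded.
        "user_data": base64.b64encode(userdata.encode("utf-8")).decode("ascii"),
    }
    if private_only:
        # An instance with no route to the metadata service needs a
        # config drive in order to receive its userdata.
        args["config_drive"] = True
    return args


# Usage sketch (requires an authenticated connection, e.g. via clouds.yaml;
# 'example-cloud' is a placeholder name):
# import openstack
# conn = openstack.connect(cloud="example-cloud")
# server = conn.compute.create_server(
#     **build_server_args("web-01", image.id, flavor.id, network.id,
#                         "web-sg", userdata, private_only=True))
# conn.compute.wait_for_server(server)
```

Keeping the argument assembly separate from the API call also makes this piece easy to unit test without touching a real cloud.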
You might notice that we are not concerned with creating volumes ‘automatically’. That is because, in our environment, volumes are used only for permanent data, and we are not creating ‘server farms’ for large-scale web apps. In addition, volumes incur additional fees (even if not all that large), so we want to keep volume creation manual.
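Since volumes are created by hand, a script only needs to attach ones that already exist. A small, testable guard like the following could check a volume record before handing it to the SDK’s cloud-layer `attach_volume` call; again, this is a sketch with illustrative names, not code from the repo:

```python
def attachable(volume):
    """Return True when an existing volume record can safely be
    attached: it is 'available' and has no current attachments.
    (These field names match what openstacksdk reports for volumes.)"""
    return volume.get("status") == "available" and not volume.get("attachments")


# Usage sketch (assumes an authenticated connection; 'data-vol' and
# 'web-01' are placeholder names for a hand-created volume and a server):
# import openstack
# conn = openstack.connect(cloud="example-cloud")
# volume = conn.get_volume("data-vol")
# if volume and attachable(volume):
#     conn.attach_volume(conn.get_server("web-01"), volume)
```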
The scripts described on this page are available in a Git repo at https://github.com/danielfdickinson/ivc-in-the-wtg-experiments.
Final thoughts (to date)
As you can see, the experiments have gone surprisingly smoothly, and we’ve been able to concentrate on iteratively improving our script. Since this is an experimental process we haven’t been using TDD (Test-Driven Development) or CI (Continuous Integration). Adding those will require some thought, as the test process will need to be able to spin up instances, which may not be something we want to do on a public cloud offering.
For those who read this, we hope the process we’ve shown proves informative, useful, and interesting. Of course, we’re not done yet, so we hope you will keep watching for updates.