This tutorial gets you running OpenStack inside a VirtualBox VM for development and testing. You can do this on a Windows or a Linux host machine. All you need is VirtualBox and Vagrant installed.
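To confirm both tools are installed and on your PATH before you start, you can print their versions (standard version flags for both CLIs):

vagrant --version
VBoxManage --version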



vagrant box add ubuntu/focal64
vagrant plugin install vagrant-disksize
mkdir aio
cd aio
vagrant init ubuntu/focal64
notepad Vagrantfile   (on a Linux host, use vi, nano, or any editor instead of notepad)
Below the line that says:

config.vm.box = "ubuntu/focal64"

add the following line:

config.disksize.size = "60GB"

Also find the provider block shown below and change the RAM and CPU to what you can spare. Remember, you need at least 8GB of RAM.

config.vm.provider :virtualbox do |vb|
  # Don't boot with headless mode
  # vb.gui = true
  #
  # Use VBoxManage to customize the VM. For example to change memory:
  vb.customize ["modifyvm", :id, "--memory", "8192", "--cpus", "4"]
end
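After both edits, the relevant parts of the Vagrantfile should look roughly like this (a minimal sketch; everything else in the generated file can stay commented out):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.disksize.size = "60GB"

  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "8192", "--cpus", "4"]
  end
end

Now bring the VM up and log in: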
vagrant up
vagrant ssh
sudo su -

Check that the VM has the right disk, CPU, and RAM:

df -h
cat /proc/cpuinfo
free -m
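If you only want the headline numbers, these two standard commands are quicker (nproc prints the CPU count, free -g shows RAM in GiB):

nproc
free -g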

git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
git checkout stable/victoria   # latest stable branch at the time of writing
time ./scripts/bootstrap-ansible.sh
time ./scripts/bootstrap-aio.sh

The time prefix just records how long each step takes on your machine, so that the next time you run it you know how long a coffee break you can take.
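When a command finishes, time prints a summary like the one below (the durations here are purely illustrative; yours will depend on your machine and network):

real    23m14.772s
user    8m2.110s
sys     2m11.341s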

cd /opt/openstack-ansible/playbooks
time openstack-ansible setup-hosts.yml
time openstack-ansible setup-infrastructure.yml

Check that MySQL (the Galera cluster) is up:

ansible galera_container -m shell \
-a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"

The output is something like:

Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e @/etc/openstack_deploy/user_variables.yml "
aio1_galera_container-6ba9b62c | SUCCESS | rc=0 >>
Variable_name Value
wsrep_cluster_conf_id 1
wsrep_cluster_size 1
wsrep_cluster_state_uuid c3d60795-a453-11e7-ac9d-1f7f5e424077
wsrep_cluster_status Primary
You can list all the containers that were created with the inventory-manage script:

../scripts/inventory-manage.py -l
+---------------------------------------------+----------+--------------------------+---------------+----------------+----------------+----------------------+
| container_name                               | is_metal | component                | physical_host | tunnel_address | ansible_host   | container_types      |
+---------------------------------------------+----------+--------------------------+---------------+----------------+----------------+----------------------+
| aio1_aodh_container-02ff8009                 | None     | aodh_api                 | aio1          | None           | 172.29.236.253 | None                 |
| aio1_ceilometer_central_container-467998a6   | None     | ceilometer_agent_central | aio1          | None           | 172.29.236.249 | None                 |
| aio1_cinder_api_container-301b6ce3           | None     | cinder_api               | aio1          | None           | 172.29.237.89  | None                 |
| aio1_cinder_scheduler_container-332ee092     | None     | cinder_scheduler         | aio1          | None           | 172.29.236.250 | None                 |
| aio1_designate_container-07dc6a55            | None     | designate_api            | aio1          | None           | 172.29.239.85  | None                 |
| aio1_galera_container-d3b4cce2               | None     | galera                   | aio1          | None           | 172.29.239.241 | None                 |
| aio1_glance_container-d1defbaa               | None     | glance_api               | aio1          | None           | 172.29.237.109 | None                 |
| aio1_gnocchi_container-b8746160              | None     | gnocchi_api              | aio1          | None           | 172.29.237.10  | None                 |
| aio1_heat_apis_container-77899b56            | None     | heat_api_cloudwatch      | aio1          | None           | 172.29.238.230 | None                 |
| aio1_heat_engine_container-3ab3337f          | None     | heat_engine              | aio1          | None           | 172.29.237.217 | None                 |
| aio1_horizon_container-7737c7f6              | None     | horizon                  | aio1          | None           | 172.29.239.118 | None                 |
| aio1_keystone_container-f5be678f             | None     | keystone                 | aio1          | None           | 172.29.236.148 | None                 |
| aio1_memcached_container-40cf9532            | None     | memcached                | aio1          | None           | 172.29.237.208 | None                 |
| aio1_neutron_agents_container-a5005a3e       | None     | neutron_agent            | aio1          | None           | 172.29.239.219 | None                 |
| aio1_neutron_server_container-bff8d8d0       | None     | neutron_server           | aio1          | None           | 172.29.238.24  | None                 |
| aio1_nova_api_metadata_container-58478dfd    | None     | nova_api_metadata        | aio1          | None           | 172.29.239.228 | None                 |
| aio1_nova_api_os_compute_container-4a0eec55  | None     | nova_api_os_compute      | aio1          | None           | 172.29.238.78  | None                 |
| aio1_nova_api_placement_container-180141fd   | None     | nova_api_placement       | aio1          | None           | 172.29.239.49  | None                 |
| aio1_nova_conductor_container-4c4a23b5       | None     | nova_conductor           | aio1          | None           | 172.29.236.251 | None                 |
| aio1_nova_console_container-1608542d         | None     | nova_console             | aio1          | None           | 172.29.236.120 | None                 |
| aio1_nova_scheduler_container-e4434fa3       | None     | nova_scheduler           | aio1          | None           | 172.29.237.61  | None                 |
| aio1_repo_container-d62834e6                 | None     | pkg_repo                 | aio1          | None           | 172.29.238.47  | None                 |
| aio1_rabbit_mq_container-a409e4a4            | None     | rabbitmq                 | aio1          | None           | 172.29.239.157 | None                 |
| aio1_rsyslog_container-aa5cff0e              | None     | rsyslog                  | aio1          | None           | 172.29.238.27  | None                 |
| aio1                                         | True     | swift_acc                | aio1          | None           | 172.29.236.100 | aio1-host_containers |
| aio1_swift_proxy_container-938b3878          | None     | swift_proxy              | aio1          | None           | 172.29.239.2   | None                 |
| aio1_utility_container-516ae092              | None     | utility                  | aio1          | None           | 172.29.239.96  | None                 |
+---------------------------------------------+----------+--------------------------+---------------+----------------+----------------+----------------------+

time openstack-ansible setup-openstack.yml
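Once it finishes, you can sanity-check from inside the VM that the load balancer is answering (assuming curl is available in the VM; -k skips the self-signed certificate that the AIO build generates):

curl -k https://<ip of the server>/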


Once everything completes, point your browser at http(s)://<ip address of the server> and you will get the Horizon login page.
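Note: with Vagrant's default NAT networking, the VM is not directly reachable from your host's browser. One way around this (an assumption about your setup, not part of the original walkthrough) is to add a port forward to the Vagrantfile and reload:

config.vm.network "forwarded_port", guest: 443, host: 8443

vagrant reload

After that, https://localhost:8443 on the host should reach Horizon.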

Run grep admin /etc/openstack_deploy/user_secrets.yml and you will see the admin password.
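The line you are after is the Keystone admin password. The output looks something like this (the variable name is the openstack-ansible default; the value here is made up):

keystone_auth_admin_password: 0a1b2c3d4e5f6a7b8c9d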

For the command line, check the output of /opt/openstack-ansible/scripts/inventory-manage.py -l and find the utility container (near the end of the list).

ssh <ip of utility container>
source openrc

Then you can run OpenStack commands there as admin.
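For example, a quick smoke test (all standard openstack CLI commands, available in the utility container):

openstack service list
openstack network list
openstack compute service list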

Follow http://www.openstackfaq.com/openstack-add-project-and-users/ and http://www.openstackfaq.com/openstack-add-images-to-glance/ to add projects, users, and images. Then you can log in to Horizon as that user and do the rest.
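If you prefer the CLI to the linked walkthroughs, creating a project and a user looks roughly like this (the names and password are illustrative; member is the default role in recent releases):

openstack project create --description "Test project" testproject
openstack user create --project testproject --password supersecret testuser
openstack role add --project testproject --user testuser member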
