# trusty-docker-openstack
An OpenStack demo for Ubuntu trusty, deployed with Ansible and Docker
## Abstract
This Vagrantfile deploys an OpenStack Icehouse environment on Ubuntu 14.04 LTS virtual machines with VirtualBox.
The OpenStack components that run on the *controller* node each run in their own Docker container.
![trusty-docker-openstack components and networks](https://raw.githubusercontent.com/kiyoad/trusty-docker-openstack/master/images/trusty-docker-openstack.gif)
## Requirements
My development environment is shown below. An Ubuntu Linux box should also work.
```
$ cat /etc/redhat-release
$ uname -a
$ cat /proc/meminfo | head -6
$ vboxmanage --version
$ vagrant --version
$ ansible --version
configured module search path = None
$ vagrant box list
```
My Vagrant base box 'ubuntu/trusty64' was obtained from the following URL.
https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box
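If the box is not installed yet, it can be added by hand first (standard `vagrant box add` usage; not a step this repository strictly requires):
```
$ vagrant box add ubuntu/trusty64 https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box
$ vagrant box list   # should now include ubuntu/trusty64
```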
## How to use
1. Modify the following values in `ansible/group_vars/all` so that they fit your environment (see the picture above; a quick way to find suitable values is sketched after the block).
```
neutron_ext_subnet_allocation_pool_start: 10.0.0.240
neutron_ext_subnet_allocation_pool_end: 10.0.0.249
neutron_ext_subnet_gateway: 10.0.0.1
neutron_ext_subnet_cidr: 10.0.0.0/24
neutron_demo_subnet_gateway: 10.0.99.1
neutron_demo_subnet_cidr: 10.0.99.0/24
```
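On a Linux host, the iproute2 tools give a quick view of suitable values; a sketch (interface names and output format vary by system):
```
$ ip route | awk '/^default/ {print $3}'   # default gateway -> neutron_ext_subnet_gateway
$ ip -4 addr show                          # interface addresses and CIDR prefixes
```
Pick an allocation pool range inside the external CIDR that nothing else on your LAN uses.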
1. Run `vagrant up`, then `vagrant reload`.
1. Log in to the *controller* node with `vagrant ssh controller`.
1. Register the CirrOS image using the bundled script, like this (a quick verification is sketched after the block).
```
vagrant@controller:~$ sudo su -
root@controller:~# ./get-cirros-and-reg.sh
```
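To confirm the image was registered, the Icehouse-era glance client can list it. This sketch assumes the client and admin credentials are available in the root shell; if not, run it inside the glance container (see the ssh commands in the Memo section):
```
root@controller:~# glance image-list
```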
1. Create the initial virtual network using the bundled script, like this (a verification sketch follows the block).
```
root@controller:~# ./create-sample-network.sh
```
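The result can be checked with the neutron client, under the same assumption about client tools and credentials as in the previous step:
```
root@controller:~# neutron net-list
root@controller:~# neutron subnet-list
```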
1. Open the dashboard in a browser and log in (a reachability check is sketched after the block).
```
http://(VirtualBox host):60080/horizon/auth/login/
openstack_admin_password: openstack
openstack_demo_password: openstack
```
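If the login page doesn't come up, a quick reachability check from the VirtualBox host helps separate network problems from dashboard problems (same host placeholder as above):
```
$ curl -sI "http://(VirtualBox host):60080/horizon/auth/login/" | head -1
```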
## Memo
1. You can check the state of the OpenStack component containers with `docker ps -a`, like this (inspect/restart commands follow the listing).
```
root@controller:~# docker ps -a
CONTAINER ID   IMAGE                     COMMAND                CREATED        STATUS        PORTS   NAMES
96454d07c559   kiyoad/cinder:latest      service_launcher.sh    37 hours ago   Up 37 hours           cinder
e7d06d0d0419   kiyoad/dashboard:latest   service_launcher.sh    37 hours ago   Up 37 hours           dashboard
0fd95c8210ce   kiyoad/neutron:latest     service_launcher.sh    37 hours ago   Up 37 hours           neutron
2972be5b0dbe   kiyoad/nova:latest        service_launcher.sh    37 hours ago   Up 7 hours            nova
80d38b54eafd   kiyoad/glance:latest      service_launcher.sh    37 hours ago   Up 37 hours           glance
2744d633e4c1   kiyoad/keystone:latest    service_launcher.sh    37 hours ago   Up 37 hours           keystone
6ab5afa8d189   rabbitmq:latest           /docker-entrypoint.s   37 hours ago   Up 37 hours           openstack-rabbitmq
a2bb4840abc0   mysql:latest              /entrypoint.sh mysql   37 hours ago   Up 37 hours           openstack-mysql
```
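If a container shows an `Exited` status, the standard Docker commands are enough to inspect and restart it; for example (container name taken from the listing above):
```
root@controller:~# docker logs keystone | tail -n 50   # last lines of the container's output
root@controller:~# docker restart keystone             # bring a stopped container back up
```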
1. The log files of the OpenStack components on the *controller* node are laid out as follows (a tailing example follows the listing).
```
root@controller:~# ls -lR /var/log/openstack/
/var/log/openstack/:
total 24
drwxrwxrwx 2 root root 4096 May 1 00:46 cinder
drwxrwxrwx 2 root root 4096 May 1 00:44 dashboard
drwxrwxrwx 2 root root 4096 May 1 00:40 glance
drwxrwxrwx 2 root root 4096 May 1 00:38 keystone
drwxrwxrwx 2 root root 4096 May 1 00:43 neutron
drwxrwxrwx 2 root root 4096 May 1 00:42 nova
/var/log/openstack/cinder:
total 1404
-rw-r--r-- 1 landscape messagebus 677635 May 2 07:15 cinder-api.log
-rw-r--r-- 1 landscape messagebus 753819 May 2 14:34 cinder-scheduler.log
/var/log/openstack/dashboard:
total 180
-rw-r--r-- 1 root root 170029 May 2 07:16 access.log
-rw-r--r-- 1 root root 5536 May 2 07:16 error.log
-rw-r--r-- 1 root root 0 May 1 00:44 other_vhosts_access.log
/var/log/openstack/glance:
total 948
-rw-r--r-- 1 landscape messagebus 659363 May 2 07:16 api.log
-rw-r--r-- 1 landscape messagebus 302449 May 2 07:16 registry.log
/var/log/openstack/keystone:
total 932
-rw-r--r-- 1 landscape messagebus 863661 May 2 07:16 keystone-all.log
-rw-r--r-- 1 landscape messagebus 85877 May 1 00:38 keystone-manage.log
/var/log/openstack/neutron:
total 34716
-rw-r--r-- 1 landscape messagebus 35544248 May 2 14:34 server.log
/var/log/openstack/nova:
total 8304
-rw-r--r-- 1 landscape messagebus 4535163 May 2 07:16 nova-api.log
-rw-r--r-- 1 landscape messagebus 596648 May 2 14:34 nova-cert.log
-rw-r--r-- 1 landscape messagebus 1460242 May 2 14:34 nova-conductor.log
-rw-r--r-- 1 landscape messagebus 593042 May 2 14:34 nova-consoleauth.log
-rw-r--r-- 1 landscape messagebus 13683 May 1 00:42 nova-manage.log
-rw-r--r-- 1 landscape messagebus 1293589 May 2 14:34 nova-scheduler.log
```
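Since these are plain files on the *controller* node, the usual tools apply; for example, to follow the nova API log in real time:
```
root@controller:~# tail -f /var/log/openstack/nova/nova-api.log
```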
1. You can log in to each OpenStack component container over ssh, like this.
```
root@controller:~# ssh -i /opt/docker/common/maintenance_id_rsa -p 65501 maintainer@localhost # keystone
root@controller:~# ssh -i /opt/docker/common/maintenance_id_rsa -p 65502 maintainer@localhost # glance
root@controller:~# ssh -i /opt/docker/common/maintenance_id_rsa -p 65503 maintainer@localhost # nova
root@controller:~# ssh -i /opt/docker/common/maintenance_id_rsa -p 65504 maintainer@localhost # neutron
root@controller:~# ssh -i /opt/docker/common/maintenance_id_rsa -p 65505 maintainer@localhost # dashboard(horizon)
root@controller:~# ssh -i /opt/docker/common/maintenance_id_rsa -p 65506 maintainer@localhost # cinder
```
Or you can use *nsenter* (see /opt/docker/common/nse.sh); a rough sketch follows.
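I won't reproduce nse.sh here, but the general *nsenter* pattern looks roughly like this (a sketch, assuming a Docker version whose `inspect` exposes `State.Pid`; the actual script may differ):
```
root@controller:~# PID=$(docker inspect --format '{{.State.Pid}}' keystone)
root@controller:~# nsenter --target $PID --mount --uts --ipc --net --pid
```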
## Bugs
1. Sometimes the dashboard doesn't work correctly after the *controller* node is restarted. I suspect the previous Apache process is not stopped cleanly, which makes the next start fail. Wait a few minutes; monit should restart Apache.
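A possible manual workaround, assuming monit inside the dashboard container watches Apache under the service name `apache2` (unverified; check the actual monit configuration first):
```
root@controller:~# ssh -i /opt/docker/common/maintenance_id_rsa -p 65505 maintainer@localhost
maintainer@dashboard:~$ monit summary          # list the services monit watches
maintainer@dashboard:~$ monit restart apache2  # restart Apache instead of waiting
```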