
v2.1.0 beta1 release

Pre-release
@ywang19 released this 01 Jan 14:42
· 677 commits to master since this release

Special Notes

  • This is the 2.1 beta 1 release; only limited testing has been performed on it.
  • 27 new features have been added and 51 bugs have been fixed in this release.

New Features

  • VSM-66 VSM interoperates with Ceph update
  • VSM-226 Need Documentation for VSM REST API
  • VSM-78 Integration with Cinder capacity filter
  • VSM-58 Calamari Integration
  • VSM-56 Report total RBD Image commit
  • VSM-32 Support defining storage groups with a storage node or rack as the unit
  • VSM-376 Support labeling devices with by-uuid paths
  • VSM-382 Undeploy/redeploy a Ceph cluster
  • VSM-389 Automatically retrieve OSD info from an existing cluster
  • VSM-386 Batch add OSDs
  • VSM-355 Used disk paths are listed in the data path and journal disk path fields on the 'Add New OSD' page
  • VSM-90 Monitor Status page improvements.
  • VSM-96 UI responsiveness
  • VSM-98 Server Configuration Change.
  • VSM-341 Add utilities to help with automated testing
  • VSM-352 Server management page auto-refreshes too frequently
  • VSM-372 Narrow the attack surface from VSM to the OpenStack cluster
  • VSM-373 On the Add New OSD page, list device names like /dev/sdd1 instead of PCI bus addresses
  • VSM-140 Ceph Development (Epic 6): Prototype Calamari/VSM dashboard implementation
  • VSM-88 On monitor status page, report what server each monitor is running on.
  • VSM-242 Allow user to modify ceph.conf outside VSM
  • VSM-15 VSM-backup prompt info not correct
  • VSM-124 [CDEK-1852] VSM | adding possibility to manipulate ceph values in cluster.manifest file
  • VSM-4 "Average Response Time" missing in dashboard Overview panel "VSM Status" section.
  • VSM-184 add automated script to help deploy VSM on multiple nodes
  • VSM-159 add issue reporting tool
  • VSM-156 add sanity check tool to help identify potential issues before or after deployment

Resolved bugs

  • VSM-349 Clicking 'Create Cluster' shows the tip: there are some zones with no monitor created
  • VSM-411 When creating a cluster with four servers, choosing all servers as storage nodes but only three as monitors, the cluster cannot be created successfully.
  • VSM-329 Remove Monitors button in Manage Servers hangs when Monitor node also has MDS daemon
  • VSM-400 The UI of all server operation pages shows "loading" without any operation being performed
  • VSM-356 Warning indicates that the number of PGs per OSD is too large after upgrading Ceph from a lower version to Hammer
  • VSM-397 mysqld takes up 100% CPU on one core and causes the VSM dashboard to become unusable
  • VSM-412 After removing a server or a monitor, adding the monitor back fails
  • VSM-391 The "ceph df" numbers are not consistent with the pool quota
  • VSM-399 The UI of the Manage Servers page is messy
  • VSM-402 After stopping and then starting a server, the OSD tree changes
  • VSM-392 After removing a volume from OpenStack, the RBD list on the VSM RBD status page still includes the volume
  • VSM-384 Stuck restarting all Ceph servers after stopping them all from the UI
  • VSM-394 When more than one pool is presented to OpenStack Cinder, volumes are always created on one pool
  • VSM-321 No Upstart mechanism is used on Ubuntu when controlling the Ceph service
  • VSM-336 On the Dashboard, even when no cluster is created, the VSM version and uptime should be displayed
  • VSM-24 [CDEK-1661] VSM Dashboard | Manage Servers | Reset server status does not work correctly.
  • VSM-365 Cluster creation gets stuck at ceph.conf creation when running VSM on CentOS 7.1
  • VSM-19 [CDEK-1613] VSM | Reset Server Status button returns "Error: Network error"
  • VSM-379 Traceback in browser when using the reset server status action buttons
  • VSM-381 Run diamond as a service instead of launching the process directly
  • VSM-378 Performance data is retrieved from outside nodes
  • VSM-374 A down server is not reflected in VSM
  • VSM-375 Malformed JSON in 'Integrate Cluster' function
  • VSM-366 The password for OpenStack access is shown in plain text
  • VSM-312 vsm-node sets the node status to "Need more IP" if a monitor-only node does not have a cluster IP address.
  • VSM-367 Can't create a cluster in a public cloud environment
  • VSM-368 The default password does not follow the password policy requiring uppercase letters and digits.
  • VSM-369 Change password: "!" is not supported in passwords even though the prompt message says it is OK
  • VSM-244 Internal server error when installing v1.1
  • VSM-224 Controller node error in /var/log/httpd/error_log - constantly ongoing messages [error] <Response [200]>
  • VSM-239 With automatic deployment, execution blocks when asking whether to start the MySQL service
  • VSM-193 hard-coded cluster id
  • VSM-179 keep ceph.conf up to date when executing "remove server" operations.
  • VSM-176 SSL certificate password is stored in a plain text file
  • VSM-177 wrong /etc/fstab entry for osd device mount point
  • VSM-166 cluster_manifest sanity check program gives incorrect advice for auth_keys
  • VSM-171 [CDEK1672] VSM_CLI | list shows Admin network in Public IP section
  • VSM-168 [CDEK1800] VSM_CLI | remove mds - doesn't update vsm database
  • VSM-121 Storage node unable to connect to the controller although the network is OK and all settings are correct
  • VSM-123 Storage node cannot contact the controller node to install if an HTTP proxy is set
  • VSM-260 The check_network in server_manifest is wrong when the node has a single network card
  • VSM-236 No way to check manifest correctness after editing
  • VSM-233 console blocks when running automatic installation procedure
  • VSM-33 negative update time in RBD list
  • VSM-216 Add storage group requires at least 3 nodes
  • VSM-113 [CDEK-1835] VSM | /var/log/httpd/error_log - constantly ongoing messages [error] <Response [200]>
  • VSM-51 Install Fails for VSM 0.8.0 Engineering Build Release
  • VSM-29 vsm-agent process causes high i/o on os disk
  • VSM-230 When presenting pools to OpenStack, cache tiering pools should be listed.
  • VSM-207 can't assume eth0 device name
  • VSM-26 [CDEK-1664] VSM | Not possible to replace a node if the Ceph cluster contains only 3 nodes.

Known issues