
Measurement and Assessment of Risks

Measurement of Risk

The digital age has enabled organisations to store and disseminate data with ease. The volume of data held in the cloud is already enormous and is growing rapidly, so any security breach or data leakage would be disastrous for every stakeholder. Organisations that have been through such bitter experiences measure the potential risks and implement cloud security measures against them. Let us discuss these briefly:

  • Measures related to data security, such as data encryption standards, key management and hierarchical access (a minimal envelope-encryption sketch follows this list).
  • Client-side efforts, since no provider-side control can prevent data espionage if the customers themselves are not vigilant.
  • Geographical location and physical protection of data centres.
  • Service level agreements to hold cloud service providers to a defined quality of service.
  • Access controls to ensure efficient, effective and secure sharing of resources between clients utilising the same infrastructure (a role-based sketch also follows this list).
  • Financial controls within and outside the organisation to ensure that both internal teams and cloud service providers operate within the allocated budgets.
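
To make the first item concrete, here is a minimal sketch of envelope encryption, a common pattern behind "data encryption standards, key management and hierarchical access": each record is encrypted with its own data key, and the data keys are in turn wrapped by a master key. It uses Python's cryptography package; in a real deployment the master key would live in an HSM or a cloud key-management service, and everything here is illustrative rather than any specific provider's API.

```python
from cryptography.fernet import Fernet

# Master key: in production this would sit in an HSM or cloud KMS,
# never next to the data. Generated inline here only for illustration.
master = Fernet(Fernet.generate_key())

def encrypt_record(plaintext: bytes) -> tuple[bytes, bytes]:
    """Envelope-encrypt one record with a fresh data key."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = master.encrypt(data_key)  # key hierarchy: master wraps the data key
    return ciphertext, wrapped_key

def decrypt_record(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = master.decrypt(wrapped_key)  # unwrap with the master key first
    return Fernet(data_key).decrypt(ciphertext)

ct, wk = encrypt_record(b"confidential customer data")
assert decrypt_record(ct, wk) == b"confidential customer data"
```

The hierarchy is what makes key management tractable: rotating the master key means re-wrapping the small data keys, not re-encrypting the bulk data.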
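The access-control item can be illustrated just as briefly. Below is a bare-bones role-based sketch in plain Python; the roles and permissions are invented for illustration, and a real multi-tenant platform would enforce this in its identity and access management layer rather than in application code.

```python
# Minimal role-based access control: clients share the same
# infrastructure, but each principal gets only what its role grants.
ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete", "share"},
    "analyst": {"read", "share"},
    "auditor": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("auditor", "read")
assert not is_allowed("analyst", "delete")  # least privilege in action
```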

Risk Assessment

The nebulous nature of the cloud has created a perception of high risk and low control over the infrastructure and data an enterprise relies on. This is one of the primary reasons executive teams want to know exactly what could happen if they move to the cloud; as with anything new, acceptance and adoption take time.
Even though executive teams understand the potential, most remain comfortable with on-premises software and solutions, partly out of the risk aversion that greets any new technology. A thorough assessment of risks must therefore be conducted before the project commences. An organisation's risk assessment strategy must contain the following elements:
  • Effective Control Mechanism: All current controls over data should be analysed. If they do not provide adequate protection for the data or service, the necessary control mechanisms should be implemented.
  • Periodic Audits: The cloud service provider and the services rendered should be audited on a monthly, quarterly or annual basis. Any discrepancy in service should be recorded and reported, and corrective measures implemented (a downtime-budget calculation useful in such audits follows this list).
  • Technical Security Architecture: The cloud service provider's current technical architecture should be analysed thoroughly. Firewalls, virtual private network provisioning, patching, intrusion-prevention mechanisms and network segregation deserve particular attention; these are high-risk areas, especially when confidential customer data is at stake (a simple port-exposure check is sketched after this list).
  • Data Integrity: A cloud service provider typically serves multiple clients at a time. How the data is stored, what hardware is used and whether confidential data sits on shared storage should all be understood beforehand; it is better to discuss these points with the provider before moving any data to the cloud (a checksum-based integrity check also follows this list).
  • Data Encryption: The name says it all. The encryption standards the cloud service provider uses should be audited beforehand, and strict scrutiny is warranted because this is a high-risk area: Sony suffered a major outage of its PlayStation Network in 2011 after hackers exploited its poor data protection practices. The envelope-encryption sketch earlier illustrates the kind of key-management discipline to look for.
  • Disaster Recovery Plan: What happens when an earthquake, flood or other natural calamity hits the data centre holding all the confidential data? Before signing any contract, the disaster recovery and contingency plans offered by the cloud service provider should be reviewed thoroughly. Internally, the organisation should maintain a clear business continuity plan so that the business is not crippled if a disaster does occur.
  • Standard Procedures: It is worth evaluating the standard operating procedures the cloud service provider follows internally. Typical examples are offsite tape backups of all data stored in the data centre, and pre-employment background screening to check whether anyone working in or managing the data centre has malicious intent.
  • Business Operations of the Cloud Service Provider: The provider's current operational and financial condition should be verified diligently, along with its operating history. For publicly traded companies this information is easy to find; for private companies, either an internal team or a third party can perform the due diligence and background checks.
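
One concrete number to check during the periodic audits above is the downtime budget implied by the provider's SLA. The sketch below converts an availability percentage into the downtime it permits per month; the tiers shown and the 30-day month are assumptions for illustration.

```python
# Convert an SLA availability percentage into permitted downtime.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes, assuming a 30-day month

def downtime_budget_minutes(availability_pct: float) -> float:
    return MINUTES_PER_MONTH * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows {downtime_budget_minutes(pct):.1f} minutes of downtime per month")
# 99.0% -> 432.0, 99.9% -> 43.2, 99.99% -> 4.3
```

If the provider's incident log shows more downtime than the budget allows, that is a discrepancy to report and escalate.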
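For the technical security architecture item, network segregation is one of the few claims an internal team can probe directly. The sketch below attempts TCP connections, from a segment that is supposed to be isolated, to ports that ought to be unreachable. The host name and port list are hypothetical placeholders; run anything like this only against systems you are authorised to test.

```python
import socket

SENSITIVE_PORTS = [22, 3306, 5432]        # SSH, MySQL, PostgreSQL
TARGET_HOST = "db.internal.example.com"   # hypothetical host for illustration

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in SENSITIVE_PORTS:
    if port_open(TARGET_HOST, port):
        print(f"WARNING: port {port} is reachable -- segregation may be broken")
```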
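Finally, for data integrity, the simplest verifiable control is a checksum recorded before upload and re-computed after download; any mismatch means the data changed in transit or at rest. This uses only Python's standard library, and the file name is a placeholder.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so large objects never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record this before uploading, re-compute after downloading from the
# provider, and compare the two values. "report.csv" is illustrative.
checksum_before_upload = sha256_of_file("report.csv")
```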
