Explain big data and cloud computing, and the challenges of integrating them

Abstract:-

     The term big data emerged with the explosive growth of global data, denoting technologies able to store and process large and varied volumes of data and to provide both enterprises and science with deep insights into their clients and experiments. Cloud computing provides a reliable, fault-tolerant, available and scalable environment in which to host distributed big data management systems. In this paper we present an overview of both technologies and of successful cases of integrating big data and cloud frameworks. Although big data solves many of our current problems, it still presents gaps and issues that raise concern and need improvement. Security, privacy, scalability, data governance policies, data heterogeneity, disaster recovery mechanisms, and other challenges are yet to be addressed. Further concerns relate to cloud computing and its ability to handle exabytes of information and to support exaflop computing efficiently.

1. Introduction :-  The concept of big data has become a major force of innovation across both academia and industry. The paradigm is viewed as an effort to understand and obtain proper insights from big datasets (big data analytics), providing summarized information over huge data loads. As such, it is regarded by corporations as a tool to understand their clients, to get closer to them, and to find patterns and predict trends.
Furthermore, big data is viewed by scientists as a means to store and process huge scientific datasets. The concept is a hot topic and is expected to continue to grow in popularity in the coming years. Although big data is mostly associated with the storage of huge loads of data, it also concerns ways to process and extract knowledge from it (Hashem et al., 2014). The five aspects commonly used to characterize big data (the five “V”s) are Volume, Variety, Velocity, Value and Veracity.

Cloud computing is another paradigm, one that promises theoretically unlimited on-demand services to its users. The cloud’s ability to virtualize resources allows it to abstract hardware, requiring little interaction with cloud service providers and enabling users to access terabytes of storage, high processing power and high availability in a pay-as-you-go model (González-Martínez et al., 2015). Moreover, it transfers cost and responsibility from the user to the cloud provider, which benefits small enterprises, for whom getting started in the IT business is a large endeavour: the initial setup demands significant effort, since the company has to consider the total cost of ownership (TCO), including hardware expenses, software licences, IT personnel and infrastructure maintenance. Cloud computing provides an easy way to obtain resources on a pay-as-you-go model, offering scalability and availability, meaning that companies can negotiate resources with the cloud provider as required. Cloud providers usually offer three basic services: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
    These three basic services are closely related: SaaS is built on top of PaaS, which in turn is built on top of IaaS.
    Since the cloud virtualizes resources in an on-demand fashion, it is a most suitable framework for big data processing, as hardware virtualization creates a high-processing-power environment for big data.

2. Big Data in the Cloud :-  Storing and processing big volumes of data requires scalability, fault tolerance and availability. Cloud computing delivers all of these through hardware virtualization. Thus, big data and cloud computing are two compatible concepts, as the cloud makes big data available, scalable and fault tolerant.

      Businesses regard big data as a valuable opportunity. As such, several new companies such as Cloudera, Hortonworks and many others have started to focus on delivering Big Data as a Service (BDaaS) or Database as a Service (DBaaS), and companies such as Google, IBM and Microsoft also provide ways for consumers to use big data on demand.

3. Big data issues :-  As the amount of data grows at a rapid rate, keeping all of it is physically cost-ineffective. Therefore, corporations must be able to create policies defining the life cycle and expiration date of data (data governance). Moreover, they must define who may access clients’ data and for what purpose. As data moves to the cloud, security and privacy become concerns that are the subject of broad research.
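To make the data governance point concrete, here is a minimal sketch of how a retention (expiration) policy might be checked against stored records. The data classes, retention windows and function names are invented for illustration and do not correspond to any particular cloud provider's API.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical retention policy: how long each class of data is kept (in days).
RETENTION_DAYS = {
    "clickstream": 90,        # assumed values, for illustration only
    "transactions": 365 * 7,
    "sensor_logs": 30,
}

def is_expired(data_class: str, created_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if a record has outlived its retention window and may be purged."""
    now = now or datetime.now(timezone.utc)
    days = RETENTION_DAYS.get(data_class)
    if days is None:
        return False  # no policy defined yet: keep the record
    return now - created_at > timedelta(days=days)

# A 100-day-old clickstream record exceeds its 90-day window.
old_record = datetime.now(timezone.utc) - timedelta(days=100)
print(is_expired("clickstream", old_record))  # True
```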

      Big data DBMSs typically deal with large amounts of data from several sources (variety), and as such heterogeneity is also a problem currently under study. Other issues being investigated include disaster recovery, how to easily upload data onto the cloud, and exaflop computing.

      In this section we provide an overview of these problems.
3.1 Security :- Security in cloud computing and big data is a current and critical research topic. It becomes an issue for corporations when they consider uploading data onto the cloud. Questions such as who is the real owner of the data, where the data is stored, who has access to it and what kind of permissions they hold are hard to answer. Corporations planning to do business with a cloud provider should be aware of these questions.
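As a minimal sketch of the "who has access and with what permissions" question, the following uses an invented in-memory access-control list; real deployments would rely on the identity and access management facilities of the cloud provider.

```python
# Hypothetical access-control list: dataset -> principal -> allowed actions.
ACL = {
    "customer_records": {
        "analytics_team": {"read"},
        "data_owner": {"read", "write", "delete"},
    },
}

def is_allowed(principal: str, action: str, dataset: str) -> bool:
    """Check whether a principal may perform an action on a dataset."""
    return action in ACL.get(dataset, {}).get(principal, set())

print(is_allowed("analytics_team", "read", "customer_records"))    # True
print(is_allowed("analytics_team", "delete", "customer_records"))  # False
```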

3.2 Privacy :- The harvesting of data and the use of analytical tools to mine information raise several privacy concerns. Ensuring data security and protecting privacy has become extremely difficult as information is spread and replicated around the globe. Analytics often mine users’ sensitive information such as medical records, energy consumption, online activity and supermarket records. This information is exposed to scrutiny, raising concerns about profiling, discrimination, exclusion and loss of control. Privacy is undoubtedly an issue that needs further attention, as systems store huge quantities of personal information every day.
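One common mitigation, shown in the illustrative sketch below, is to pseudonymize directly identifying fields with a salted hash before records enter an analytics pipeline; the field names and salt handling are assumptions, and this is not a complete privacy solution.

```python
import hashlib

# Fields treated as directly identifying in this illustrative example.
SENSITIVE_FIELDS = {"name", "email", "patient_id"}
SALT = b"replace-with-a-secret-salt"  # placeholder; a real system must manage this secret properly

def pseudonymize(record: dict) -> dict:
    """Replace sensitive field values with truncated, salted SHA-256 digests."""
    result = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode("utf-8")).hexdigest()
            result[key] = digest[:16]  # truncated for readability
        else:
            result[key] = value
    return result

print(pseudonymize({"name": "Alice", "email": "alice@example.com", "purchases": 12}))
```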

3.3 Heterogeneity :- Big data concerns big volumes of data, but also different velocities (i.e., data arrives at different rates depending on its source’s output rate and network latency) and great variety. The latter comprises very large and heterogeneous volumes of data coming from several autonomous sources, and is one of the major aspects of big data characterization, driven by the belief that storing all kinds of data may be beneficial to both science and business.

      Data arrives at a big data DBMS at different velocities and in different formats from various sources, because different information collectors prefer their own schemata or protocols for data recording, and the nature of different applications also results in diverse data representations. Dealing with such a wide variety of data and such different velocity rates is a hard task that big data systems must handle, aggravated by the fact that new types of file are constantly being created without any kind of standardization. Thus, providing a consistent and general way to represent and explore complex and evolving relationships in this data still poses a challenge.
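As a small illustration of the schema-mapping burden described above (both sources, their formats and field names are invented), the sketch below normalizes records arriving from a JSON-based source and a CSV-based source into one common representation.

```python
import csv
import io
import json

def from_source_a(raw: str) -> dict:
    """Source A delivers JSON with its own field names."""
    obj = json.loads(raw)
    return {"sensor": obj["device_id"], "value": float(obj["reading"]), "unit": obj.get("unit", "C")}

def from_source_b(raw: str) -> dict:
    """Source B delivers a CSV line in the form: id,measurement,unit."""
    row = next(csv.reader(io.StringIO(raw)))
    return {"sensor": row[0], "value": float(row[1]), "unit": row[2]}

# Two heterogeneous inputs mapped onto a single schema: sensor, value, unit.
records = [
    from_source_a('{"device_id": "t-17", "reading": "21.5"}'),
    from_source_b("t-42,19.8,C"),
]
print(records)
```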
      Beyond these, there are also other open issues such as data governance, disaster recovery and exaflop computing.

        Conclusion :- With data increasing on a daily basis, big data systems and, in particular, analytic tools have become a major force of innovation that provides a way to store, process and extract information from petabyte-scale datasets. Cloud environments strongly leverage big data solutions by providing fault-tolerant, scalable and available environments for big data systems. Although big data systems are powerful tools that enable both enterprises and science to gain insights from data, some concerns still need further investigation. Additional effort must be put into developing security mechanisms and standardizing data types. Another crucial element of big data is scalability, which in commercial solutions is mostly manual rather than automatic; further research is required to tackle this problem. In this particular area, we plan to use adaptable mechanisms to develop a solution for implementing elasticity along several dimensions of big data systems running in cloud environments. The goal is to investigate the mechanisms that adaptable software can use to trigger scalability at different levels of the cloud stack, thus accommodating data peaks in an automatic and reactive way.

   
