Tuesday, 11 February 2020

Pervasive and Cloud Computing


Pervasive and Cloud Computing: 
A Unique Association
By
Rana Sohail
MSCS (Networking), MIT



Abstract— Today, mobile devices are among the handiest sources available for all types of updates, such as weather forecasts, news and happenings around the world, and all sorts of information can be downloaded in no time. Mobile devices are therefore not only a means of communication but are also used for other purposes such as data processing, computing, and uploading and downloading of audio and video files. Given the limited storage capacity of such devices and the data load while performing a number of tasks, blocking problems are to be expected. External help is needed to ease the situation, and a cloud computing platform provides such an environment. However, it is quite difficult to access such platforms easily. A way is therefore needed for mobile devices to access the cloud computing platform, with access guaranteed anywhere and at all times. A virtual cloud computing platform is one approach that could be useful owing to the pervasiveness of mobile devices. This paper endeavours to show the association between pervasive and cloud computing and how to benefit from the creation of a virtual cloud computing platform.

Keywords— Mobile devices, cloud computing, pervasive computing, virtual cloud platform


I. INTRODUCTION

Inventions are linked with visions: people visualize, and things are invented in due course of time. Two decades ago Mark Weiser, the father of ubiquitous computing, envisioned computing systems disappearing from the real world and working in the background, a dream at the time. Since then, continuous and gradual progress in hardware (HW) and software (SW) has made many such dreams come true. Human-computer interaction (HCI) has increased in daily life because of distributed systems and mobile computing, which are the foundational pillars of pervasive computing. The personal computer (PC) connected to a local area network (LAN) is a distributed system, which is essentially static; mobile computing adds mobility to the distributed system. Ubiquitous or pervasive computing is no longer a dream, thanks to the inventions of mobile devices (smartphones, laptops, tablets etc.), wireless LANs, sensing devices and controller machinery, as explained in [1] and [2]. People use these gadgets as if they cannot live without them, and this is what pervasiveness is all about: everyone forgets about the technology behind these devices.
In pervasive computing, mobile devices depend on external resources such as the internet and intranets for services like instant messaging (IM), email, weather forecasts, information in any field of life, social media etc. This dependency arises from limited data storage space and processing speed and, most importantly, the need for continuous updating of information, since today everything becomes outdated in no time. In late 2007, the term “Cloud Computing” was introduced for the provision of different services invisibly, as if obscured by a cloud. These services include guaranteed quality of service (QoS), reliability, timely delivery of desired data etc.
There is a bond between pervasive computing and cloud computing, as users place their data (photos, files) and applications in the cloud and later access them in a pervasive way. This paper analyses the association between pervasive and cloud computing, and further explores virtual cloud computing and its advantages, as in [3].
The paper is organized into sections. In section II, pervasive computing is described. In section III, cloud computing is highlighted. In section IV, the association between pervasive and cloud computing is analysed. In section V, the virtual pervasive cloud computing platform is discussed. In section VI, the discussion is concluded.


II.  PERVASIVE COMPUTING

Pervasive computing can be explained through its relationship with distributed systems and mobile computing. The conceptual parameters of a distributed system, combined with wireless parameters, create mobile computing; this outcome, combined with some further parameters, results in pervasive computing. This can be understood from figure 1, as explained in [1], which depicts the conceptual relationships and the creation of pervasive computing.

A. Distributed System 
A distributed system can be defined as a collection of independent computers used as a single system. It is a static system whose components are connected by wire: base stations are the nodes, and the wire is the medium of communication among them. If one station breaks down, the network may become temporarily non-operational in that region only. The LAN and WLAN lead to the distributed system; the internet, intranets, bank ATM machines etc. are a few examples of distributed computing. Its structure rests on three major characteristics: concurrency, no global clock, and independent failures. The “client-server” concept, where all resources are managed centrally at the server side, is the hallmark of this system. The system has a number of benefits, such as resource sharing, scalability, fault tolerance, 24/7 availability and, above all, high-speed performance. All components have access to the base stations at all times without any major hindrance, and the servers hold the data made available to all clients. Since the system is wired, security is also an aspect to be taken care of. Figure 2 explains how all the components merge to create the distributed system. The conceptual parameters, as explained in [4], are as under:-
1)  Heterogeneity:  All components must interact with each other despite differences in operating system (OS), HW architecture, communication protocols, programming languages, interfaces, security models etc.
2)  Transparency:  The system should not expose any inter-component complexity to the users, and all the deployed components should behave like a single unit.
3)  Fault Tolerance:  If one or more components fail for any reason, the communication of the others should be unaffected; the entire system should not suffer because of the failure of one, two or more components. The solution is HW redundancy and SW recovery.
4)  Scalability:  Even if the user load increases along with the addition of resources, the performance of the system should be enhanced rather than suffering failure or breakdown.
5)  Concurrency:  The system should not collapse when programs access a resource simultaneously.
6)  Openness and Extensibility:  The interfaces of the different programs should be clearly and publicly visible to the users as separate entities, and should be available for extension by both existing and new components.
7)  Migration and Load Balancing:  The system should permit tasks to move around without hindrance, and running applications and logged-in users should not be affected. Once such task movement is possible, the load must be distributed among the available resources to improve performance.
8)  Security:  This is a very important aspect: wires and components can be secured from attacks, but it is quite difficult to secure data that is in the air. Therefore all resources should be secured in advance so that only authorized users and applications can access them.
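The concurrency parameter above can be sketched with a small program. This is an illustrative assumption, not code from any cited system: several “clients” update one shared resource at the same time, and a lock serializes access so the system does not lose updates.

```python
import threading

# Hypothetical shared resource: a counter updated by many clients at once.
counter = 0
lock = threading.Lock()

def client_increment(times):
    global counter
    for _ in range(times):
        with lock:          # serialize concurrent access so no update is lost
            counter += 1

threads = [threading.Thread(target=client_increment, args=(1000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 8000: all concurrent updates survive
```

Without the lock, simultaneous `counter += 1` operations could overwrite each other, which is exactly the collapse the concurrency requirement forbids.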

B. Mobile Computing
Mobile computing is an upgraded, non-static version of distributed computing: the system is further miniaturized, and the addition of wireless networking leads to mobile computing. Mobile units are connected wirelessly to the base stations and to each other, communicating over the spectrum on some frequency. Since all or some components are always on the move, dedicated SW is required to take care of certain problem areas. Wireless connectivity is provided at short range, as with the Infrared Data Association (IrDA), Bluetooth and WLAN (IEEE 802.11a/n), and at long range, as with 3G, EDGE, 4G etc. The concept of “any time”, “anywhere” and “with anyone” is thereby achieved in a befitting manner. There are certain limitations of computing power, battery power, memory, storage space, bandwidth, latency, and interface elements such as screen size and input model. Figure 3 explains the upgrading of the distributed system once new components are integrated to turn the static system into a moving one, thus creating mobile computing. Details are as under:-
1)  Mobile Networking:  Wireless networking involves IP addresses, protocols, and TCP. Even with all these components, connectivity is not assured in terms of performance and reliability. The reason is location: when operating inside a building, connectivity is reliable with high bandwidth, but outdoors it is the other way round, as in [5].
2)  Mobile Information Access:  It can be achieved by controlling data consistency as in [6], by disconnected operation as in [7], and by file access through bandwidth adaptation as in [8].
3)  Adaptive Applications:  The system is supported by adaptive applications, such as using a proxy for transcoding as in [9] and managing resources adaptively as in [10].
4)  Energy-Aware Systems:  There are a number of techniques through which energy can be saved at the system level, such as energy-aware adaptation as in [11], scheduling processors with variable speed as in [12], and making memory energy-sensitive so that it can be managed efficiently as in [13].
5)  Location Sensitivity:  The system should know and be aware of its location as in [14]; furthermore, it should have devices that can sense the location as in [15].
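The bandwidth-adaptive file access and adaptive-application ideas above can be illustrated with a minimal sketch. The thresholds and fidelity names here are assumptions for illustration only, not values from the cited systems: the application measures its current bandwidth and degrades the fidelity of the data it fetches accordingly.

```python
# Hypothetical bandwidth cut-offs (kbps); the values are illustrative assumptions.
def choose_fidelity(bandwidth_kbps):
    """Pick a data fidelity level based on the currently measured bandwidth."""
    if bandwidth_kbps >= 1000:
        return "full"       # e.g. original image/video quality indoors
    elif bandwidth_kbps >= 200:
        return "reduced"    # e.g. a transcoded, lower-resolution version
    else:
        return "text-only"  # degrade gracefully on a weak outdoor link

print(choose_fidelity(1500))  # full
print(choose_fidelity(64))    # text-only
```

A transcoding proxy, as in [9], would sit between the server and the mobile client and apply a policy of this shape on the client's behalf.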



C. Pervasive Computing
Pervasive computing is an upgraded version of mobile computing with its own properties added. It can be explained as the absence of the “presence of computing”: all the computing is done in the background and nothing is visible to the users. They use it as a routine life activity and do not know what happens once they push a button on any appliance; they just get the work done, whether receiving some service physically or virtually. It is therefore also known as invisible computing. The concept of the smart space is integrated, where all the places around us are connected through sensors (Wi-Fi) accessible to all users. Figure 4 depicts the creation of pervasive computing. Details are as under:-
1)  Smart Spaces:  Users are provided with an environment that is basically an overlay digital infrastructure. These environments, also called smart environments or smart spaces, are developed by connecting different equipment to mobile devices and wireless sensor networks (WSNs), as in [16]. These smart spaces, indoor or outdoor, are embedded with a computing infrastructure that connects the two worlds, allowing one world to control the other.
2)  Invisibility:  The technology of pervasive computing should vanish completely from the user’s environment and mind. The system works invisibly from the real world, and users should receive full services with only unavoidable distraction. A time will come when interaction with pervasive computing happens at the subconscious level, and users will use it unconsciously to carry out routine daily tasks, as explained in [17, 18].
3)  Localized Scalability:  The smart space environment brings a huge number of users, so problems such as bandwidth congestion, energy crisis and distraction are expected. This invites a scalability problem, which is addressed by localizing it: the density of user interactions falls as users move away from the smart space, while distant interactions become irrelevant, as in [19].
4)  Uneven Conditioning:  The penetration of pervasive computing will vary due to non-technical factors such as organizational structure, economic scenario and business models. Smart spaces behave differently according to location, whether closed places (rooms, offices, conference halls etc.) or open areas (corridors, building surroundings etc.). Uniform penetration will be very difficult in the near future, so the invisibility of pervasive computing is questionable, as explained in [19].
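Localized scalability can be sketched as follows. The coordinates and radius are illustrative assumptions: a smart space serves only the users inside its radius, so its interaction load stays local instead of growing with the global user count.

```python
# Hypothetical sketch: a smart space only serves users within its radius.
def local_users(users, space_center, radius):
    cx, cy = space_center
    return [u for u in users
            if (u["x"] - cx) ** 2 + (u["y"] - cy) ** 2 <= radius ** 2]

users = [{"id": 1, "x": 1, "y": 1},    # inside the smart space
         {"id": 2, "x": 50, "y": 50},  # far away: interaction is irrelevant
         {"id": 3, "x": 2, "y": 0}]    # inside the smart space
nearby = local_users(users, (0, 0), 5)
print([u["id"] for u in nearby])  # [1, 3]
```

As a user moves beyond the radius, they simply drop out of this list, which is the “rush of interactions reduces with distance” behaviour described in [19].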


III.  CLOUD COMPUTING

The terminology “Cloud Computing”, coined in late 2007, can be explained as the provision of services by a number of remotely deployed servers via the internet. These services include everything from applications to computing infrastructure. Basically, cloud computing includes both the providers (HW and system SW) and the services (applications). Yahoo, Gmail and Hotmail are good examples, where users just log in to their accounts while the SW and storage are provided by the cloud as a service. This establishes the fact that HW and system SW placed at data centers provide on-demand applications (services) at any time and any place to users through the internet. These services are available to everyone: public and private sectors, banks and enterprises, businesses and corporations etc. Virtualization plays an important role in cloud computing, creating for users the illusion of infinite resources with a high level of scalability.
Cloud computing saw its real growth when Google launched “Google Apps” on Google Chrome in 2009 and Microsoft introduced “Windows Azure” in 2011. Later, “Office 2013” and “Windows 8” were also cloud-platform based, as in [20]. To understand cloud computing better, certain details such as cloud components, service models, deployment of services and applications are to be comprehended. Details are as under:-
A. Components of Cloud Computing
Cloud computing is divided into four main components which interact with each other. Users request some application or data through client computers from the distributed servers. The request is forwarded to the datacenters, where a search is carried out; the requested data or application is collected by the servers and forwarded to the client. The process is as under:-
1)  Client Computers:  Client computers are the devices which interact with the cloud. These devices, namely mobile, thick and thin clients, are used by the users. Mobile clients (smartphones) run online applications which have the advantages of accessibility from any location and multiple platforms. A thick client, also called a fat client, is an offline networked application which has maximum resources available for processing, which is not the case for a thin client, as explained in [21].
2)  Distributed Servers:  The servers are deployed at different places in different regions. The geographical placement of the servers does not hinder performance; they work as if they were placed next to each other.
3)  Datacenter:  A place where a collection of servers keeps applications accessible to the clients through the internet. The HW and SW are handled simultaneously. Datacenter size varies according to usage, public or private; the former requires a far larger place than the latter. There are three types: small scale, medium size, and extremely large, where a great number of machines are deployed. A public-level datacenter may occupy a medium-sized place, but not in the case of Google or Microsoft, where extremely large facilities with many machines work round the clock, as well elaborated in [22].
4)  Central Server:  It plays a pivotal role in cloud computing, monitoring client demands and looking after the traffic. It uses certain protocols and “middleware” software which enable and ensure uninterrupted communication among the networked computers. Cloud computing stores clients’ information at two places, so that in case of a breakdown the whole system is not affected and the backup copy suffices. For this purpose, double the storage capacity is required to keep the data at two places; the central server can retrieve the data in case of failure, as described in [23].
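The dual-storage idea behind the central server can be sketched in a few lines. This is a hedged illustration, not the actual mechanism of any cloud provider: every write goes to two stores, and a read falls back to the backup copy when the primary is down.

```python
# Illustrative sketch of keeping data at two places, doubling storage capacity.
class RedundantStore:
    def __init__(self):
        self.primary = {}
        self.backup = {}

    def put(self, key, value):
        self.primary[key] = value   # write both copies
        self.backup[key] = value

    def get(self, key, primary_up=True):
        if primary_up and key in self.primary:
            return self.primary[key]
        return self.backup.get(key)  # breakdown: serve the backup copy

store = RedundantStore()
store.put("photo.jpg", b"...bytes...")
print(store.get("photo.jpg", primary_up=False))  # served from the backup
```

The cost of this design is exactly the doubled storage the text mentions; the benefit is that a single failure never makes the data unavailable.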
B. Service Models
Cloud computing allows users to access HW/SW services and data resources and to run their applications on a computing platform, as explained in [24]. Figure 5 shows the architecture of cloud computing in detail; the layers are explained in the subsequent paragraphs:-

1)  Software as a Service (SaaS):  The distribution of software is supported by SaaS according to specific demand. Through the internet, a user remotely accesses the data and applications and pays before getting them. It provides the software, operating system (OS) and network. Salesforce and Microsoft’s Live Mesh are two examples among others which provide such services.
2)  Platform as a Service (PaaS):  It provides a platform where applications can be built, tested and deployed. It provides the OS and network. Microsoft Azure and the Google search engine are two examples among others.
3)  Infrastructure as a Service (IaaS):  It is the layer on top of the datacenter layer which enables clients to access the HW, SW, storage, servers and networking components. Users pay individually for the services they have used, thereby saving money. Shrinking and expanding facilities are available dynamically. Here, only the network is provided. Amazon EC2 and S3 are two examples among others.
C. Cloud Service Deployment
Certain services are provided at different levels, as shown in figure 6 and explained below:-
1)  Public Cloud:  It provides services publicly, accessible to everyone. It can be owned by any organization in the public or private sector, such as official or semi-official institutions and academic or business setups.
2)  Private Cloud:  It is the opposite of the public cloud, being accessible only by a single authorized organization, which may comprise a number of consumers. It may be owned by some authorized organization in the public or private sector, such as official or semi-official institutions and academic or business setups.
3)  Community Cloud:  Like the private cloud, it enjoys the same facilities, but here a specific community can share secure information such as passwords, policies, security parameters etc. It may be owned by one or more authorized organizations belonging to that specific community.
4)  Hybrid Cloud:  It is a combination of two or all three of the above clouds. The combined clouds keep their identities intact and set some rules under which they share data and information among themselves.


IV.  PERVASIVE AND CLOUD COMPUTING  – ASSOCIATION

As described earlier, pervasive computing creates an environment where all users (moving or static) communicate through wireless technology with computing devices (embedded or portable). While pervasive computing holds the characteristics of invisibility, wireless sensing, mobility, and distributed and mobile computing, it also suffers from limited scalability, resource scarcity, limited availability, frequent disconnection, and limited energy. Consequently, it is deprived of a number of applications such as image/speech detection and recognition, text translation, social networking, multimedia etc. Cloud computing provides a viable solution to pervasive computing’s inherent issues with its scalability, availability, abundance of resources, absence of disconnections and unlimited, uninterrupted energy.
Furthermore, cloud computing’s concept of offloading data and computation to the resource provider removes a major burden from pervasive computing. Figure 7 shows the association between pervasive and cloud computing. Cloud computing overcomes the issues of pervasive computing through the following, as explained in [25]:-
A. Resource Pooling
Cloud computing has the capability of large-scale computing and storage, so pervasive computing can exploit it by executing applications on low-resource, limited-energy pervasive devices.
B. Availability
As cloud computing is available anywhere at any time, pervasive computing exploits this to overcome its inherent issues of limited availability and frequent disconnections.
C. Scalability
Cloud computing has the capability of high scalability: with minor modifications, new clients and servers can be added to the cloud infrastructure. It also opens the avenue of adding more services to the cloud. Pervasive computing exploits this, as more mobile users can be served and more computing devices (embedded or portable) can be connected.
The above discussion establishes that pervasive computing can overcome its limitations once associated with cloud computing. The association between the two creates an environment where a “Pervasive Cloud” is born. It has two components working hand in hand to give the desired results:-
A. Cloud Computing Component (CCC)
This component represents the cloud computing environment. The service models SaaS, PaaS and IaaS provide all the required services to the users in a pervasive computing environment. The CCC also ensures that the main characteristics of pervasive computing, namely invisibility, localized scalability and adaptive nature, are not compromised at any cost.
B. Pervasive Computing Component (PCC)
This component represents the pervasive computing environment. A number of tasks are performed by sub-components to get uninterrupted services from the CCC: keeping a record of all available services, of the service requests received and entertained, and of the devices moving among different smart spaces.
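The PCC bookkeeping tasks just listed can be sketched as a small registry. All names and fields here are illustrative assumptions, not part of any cited design: it tracks available services, counts requests received and served, and records which devices are in which smart space.

```python
# Illustrative sketch of the PCC record-keeping tasks.
class PervasiveRegistry:
    def __init__(self):
        self.services = set()
        self.requests = {"received": 0, "served": 0}
        self.spaces = {}   # smart-space name -> set of device ids

    def register_service(self, name):
        self.services.add(name)

    def record_request(self, service, served):
        self.requests["received"] += 1
        if served and service in self.services:
            self.requests["served"] += 1

    def device_moved(self, device, space):
        # drop the device from its old smart space, then add it to the new one
        for members in self.spaces.values():
            members.discard(device)
        self.spaces.setdefault(space, set()).add(device)

reg = PervasiveRegistry()
reg.register_service("weather")
reg.record_request("weather", served=True)
reg.device_moved("phone-1", "office")
reg.device_moved("phone-1", "corridor")
print(reg.spaces)  # {'office': set(), 'corridor': {'phone-1'}}
```

The `device_moved` method captures the “record of moving devices in different smart spaces” task: a device belongs to at most one space at a time.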


V.  VIRTUAL CLOUD COMPUTING PLATFORM 

Although the pervasive cloud computing platform delivers all services to the users at their doorstep, two aspects need attention: firstly, the services depend on continuous availability of a connection, and secondly, it may be too expensive to be affordable for everyone. If a framework of virtual pervasive cloud computing is formulated, these issues can be resolved.
The framework detects nodes in the near vicinity and establishes a connection among them. These nodes (mobile devices) can act as a virtual cloud computing provider. If a user needs to download some information but fails due to lack of signal or expensive roaming charges, the user can get the desired data from other nodes in the area. In this way, the virtual cloud provider offers the same computing and offloading services to the users, as in [26].
All mobile devices in the smart spaces can be providers to others, as their pervasiveness makes them available to other devices in the same area at all times. Moreover, there is no cost involved once they create a community centre and share information at any time. The framework components are as under:-
A. Application Manager
It manages the load time and offloading of an application, and controls the execution of an application while it is under modification.
B. Resource Manager
It controls the profiling of an application on the local device and also monitors the local device’s resources. Every application gets a profile, which is used for creating a virtual cloud from the nearby available devices. The application manager always checks an application’s profile before its execution.
C. Context manager
It uses and synchronizes contextual information from the devices’ graphical user interfaces (GUIs). These GUIs are made available to other processes as well. Location and devices in the near vicinity are the two major contexts requiring special attention. The application manager uses this contextual information for mobility traces and cloud creation, made possible with the help of the peer-to-peer component.
D. Peer-to-Peer Component
It has the responsibility of informing the context manager about the present number of devices in an area and about the entry or departure of any device.
E. Offloading Manager 
It receives requests from a device and forwards them to other devices to be accomplished; in the same way, it keeps taking jobs from one side and getting them done on the other. It also detects failures, in which case it forwards the job again until it is done.
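The offloading manager's forward-and-retry behaviour can be sketched as follows. The node names and the `run_on_node` helper are hypothetical stand-ins for the real device communication: a job is forwarded to a nearby node, a failure is detected, and the job is retried on the next available node.

```python
# Hedged sketch of the offloading manager's forward/retry loop.
def offload(job, nodes, run_on_node):
    for node in nodes:
        try:
            return run_on_node(node, job)   # forward the job to this node
        except ConnectionError:
            continue                        # failure detected: try the next node
    raise RuntimeError("no node could complete the job")

def run_on_node(node, job):
    # Hypothetical executor: "dead-node" simulates an unreachable device.
    if node == "dead-node":
        raise ConnectionError
    return f"{job} done by {node}"

print(offload("translate", ["dead-node", "phone-2"], run_on_node))
# translate done by phone-2
```

In a real deployment the node list would come from the peer-to-peer component, which already tracks which devices are present in the smart space.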


VI. CONCLUSION

This paper has discussed the association between pervasive and cloud computing. Initially, the facts were established through an in-depth review of both systems. The review showed that pervasive computing has certain weak areas which need to be strengthened, while the review of cloud computing revealed that the weaknesses of pervasive computing could be rectified if the two were integrated. There is an environment where both can work in harmony and with mutual consent: the pervasive cloud computing platform. This platform overcomes the drawbacks of pervasive computing, but the new design has its own issues from the user’s perspective: connection non-availability, frequent disconnections and high expense make it difficult for the users.
The paper has further pondered the situation and formulated a solution. The problem can be resolved through the creation of a virtual network where all the devices present in their respective smart spaces help each other. This is possible by creating a virtual cloud framework in which the actual pervasive cloud is not approached; one device can be helped out by another device through virtual pervasive cloud computing.
This is not the end, as further research avenues remain open in this field, such as speed enhancement, localized Bluetooth networking, and online integrated mobile devices.


REFERENCES

[1]  M. Satyanarayanan, “Pervasive Computing: Vision and Challenges”, Carnegie Mellon Uni. IEEE, 2001, pp. 1.
[2]  Zohreh Sanaei et. al. “Hybrid Pervasive Mobile Cloud Computing: Toward Enhancing Invisibility”, Uni. of Malaya, 2013, pp. 1.
[3]  Lizhe Wang et. al. “Cloud computing: A Perspective study”, Rochester Institute of Tech. USA, 2008, pp. 3.
[4]  Krishna Nadiminti et. al. “Distributed Systems and Recent Innovations: Challenges and Benefits” Uni. of Melbourne, Australia, 2006, pp. 2-3.
[5]  M. Satyanarayanan, “Fundamental Challenges in Mobile Computing”, Carnegie Mellon Uni. USA, 1996, pp. 1-6.
[6]  Douglas B. Terry et. al. “Managing update conflicts in Bayou, a weakly connected replicated storage system”, in Proc. of 15th ACM, USA, 1992, pp. 1-11.
[7]  James J. Kistler and M. Satyanarayanan, “Disconnected Operation in the Coda File System”, Carnegie Mellon Uni. ACM Transactions on CS, vol. 10, No. 1, Feb. 1992, pp. 3-25.
[8]  Lily B. Mummert et. al. “Exploiting Weak Connectivity for Mobile File Access”, Carnegie Mellon Uni. 1995, pp. 1-12.
[9]  Armando Fox et. al. “Adapting to network and client variability via on-demand dynamic distillation”, Uni. of California, 1996, pp. 1-10.
[10] Brian D. Noble et. al. “Agile Application-Aware Adaptation for Mobility”, in Proc. of 16 th ACM, 1997, pp. 1-12.
[11] J. Flinn and M. Satyanarayanan, “Energy-aware adaptation for mobile applications”, 1999, para. 3, pp. 2-10.
[12] Alvin R. Lebeck et. al. “Power-Aware Page Allocation”, Duke uni. USA, 2000, pp. 4-10.
[13] Frances Yao et. al. “A scheduling model for reduced CPU energy”, IEEE, Uni. of Pittsburgh, 1995, pp. 374-382.
[14] Bill Schilit et. al. “Context-Aware Computing Applications” IEEE 95, 1994, pp. 85-89.
[15] R. Want et. al. “The Active Badge Location System” ACM Trans. On info. Sys. vol. 10, No. 1, England, 1992, pp. 96-98
[16] Miguel S. Familiar et. al. “Pervasive Smart Spaces and Environments: A Service-Oriented Middleware Architecture for Wireless Ad Hoc and Sensor Networks”, IJDSN, vol. 2012, pp. 1.
[17] R. Taylor, “Pervasive Systems, Invisibility and Mobility- Towards an Open Source Pervasive System Framework”, in Proc. WorldCIST’13, Portugal, 2013, pp. 17.
[18] Albrecht Schmidt et. al. “Interacting with 21st-Century Computers”, IEEE CS, 2012, pp. 3.
[19] Ahmed Youssef, “Towards Pervasive Computing Environments with Cloud Services”, IJASUC, vol.4, para. 2, Jun. 2013, pp. 2-5.
[20] Niroshinie Fernando et. al. “Mobile cloud computing - A survey” La Trobe Uni. Australia, para. 3.1, pp. 87.
[21] Dejan Kovachev et. al. “Mobile Cloud Computing- A Comparison of Application Models” RWTH, Germany, 2011, pp. 2.
[22] Michael Armbrust et. al. “A View of Cloud Computing”, comm. of ACM, vol. 53, Apr. 2010, pp. 51.
[23] Jonathan Stickland, “How cloud computing works”, [Online]. Available: http://computer.howstuffworks.com/cloud-computing/cloud-computing4.htm, pp. 3.
[24] Hoang T. Dinh et. al. “A survey of mobile cloud computing - architecture, applications, and approaches”, NTU, Singapore, 2013, pp.5-6.
[25] Ahmed Youssef, “Towards Pervasive Computing Environments with Cloud Services”, IJASUC, vol.4, para. 4, Jun. 2013, pp. 2-5.
[26] Gonzalo Huerta-Canepa and Dongman Lee, “A virtual cloud computing provider for mobile devices”, KAIST, South Korea, 2010, para 3.3, pp. 3.

Cache Memory Coherence

Role of Cache Memory Coherence in Shared Memory Architecture
By
Rana Sohail
MSCS (Networking), MIT



Abstract— Communication among multi-processors is established through a shared memory system which uses a shared address space. As a result, memory traffic and memory latency increase. To increase speed and decrease memory traffic, a cache memory (buffer) is used, which enhances system performance. Caches are helpful but require coherence among each other when functioning in multi-processors. This paper presents the cache coherence problems and the available solutions, and recommends measures for real-world scenarios.

Keywords— Multi-processor, shared memory, cache memory, cache coherence


I.      INTRODUCTION

A cache can be defined as a unit used to store data. Data which is frequently accessed and will be required in the near future is made available at the cache level, with no need to access the main memory; this saves the user’s time and effort. When the cache is approached for data and the data is available, it is called a ‘cache hit’; if the data is not there, it is called a ‘cache miss’, and the data is then collected from the main memory. Speed and performance are considered high if the maximum number of requests is served by the cache memory, as in [1].
Caches are further divided into two groups, namely initiative caches and passive caches. An initiative cache must entertain every request, serving the data from its own store if available or fetching it from the main memory and sending it to the user. A passive cache entertains a request only if the data is available with itself; otherwise it does nothing, as in [2].
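The two cache groups, and the hit/miss distinction above, can be sketched in a few lines. The `main_memory` dict is a stand-in assumption for the real backing store:

```python
# Minimal sketch of initiative vs. passive caches over a stand-in main memory.
main_memory = {"a": 1, "b": 2}

class InitiativeCache:
    """On a miss, fetches from main memory, stores the value, and serves it."""
    def __init__(self):
        self.store, self.hits, self.misses = {}, 0, 0

    def get(self, key):
        if key in self.store:
            self.hits += 1           # cache hit: served locally
            return self.store[key]
        self.misses += 1             # cache miss: go to main memory
        value = main_memory.get(key)
        self.store[key] = value
        return value

class PassiveCache:
    """Serves only what it already holds; otherwise it does nothing (None)."""
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)   # no fetch from main memory on a miss

c = InitiativeCache()
c.get("a"); c.get("a")
print(c.hits, c.misses)  # 1 1: first access misses, repeat access hits
```

The same request sequence against a `PassiveCache` would simply return `None`, since a passive cache never fetches on behalf of the user.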
The paper is divided into sections. In section II, the cache and its classification are defined. Section III describes the shared memory architecture. In section IV, cache coherence and some common issues are highlighted. Section V gives the related work on the subject. In section VI, the paper is concluded.



II.    CACHE MEMORY – CLASSIFICATIONS



There are a number of uses of cache memory, and it is therefore classified according to its uses. These classifications are helpful for two events, namely reading the data and calculating the data. The purpose of cache memory is to save the time of the users as well as the operating system, as in [2]. The classification of cache memory is explained in the subsequent paragraphs:-

A.    Local Cache
It is very small in size and resides in memory. When the same resource is requested by multiple users, it plays its role by avoiding repeated requests. Generally it is a hash table kept in the code of the application program. Its function can be explained by an example: if the names of users are to be shown when only their IDs are known, then a local cache is the best solution.
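The ID-to-name example above can be sketched as a plain hash table inside the application code. The function names here (`make_cached_lookup`, `slow_fetch`) are hypothetical, and the fetch function stands in for whatever slow source (database, service) holds the names.

```python
def make_cached_lookup(fetch_name):
    cache = {}                       # the "local cache": a plain hash table
    def lookup(user_id):
        if user_id not in cache:     # only the first request hits the slow source
            cache[user_id] = fetch_name(user_id)
        return cache[user_id]
    return lookup

calls = []
def slow_fetch(user_id):             # stands in for a database or remote service
    calls.append(user_id)
    return f"user-{user_id}"

lookup = make_cached_lookup(slow_fetch)
names = [lookup(7), lookup(7), lookup(9)]
print(names, len(calls))  # ['user-7', 'user-7', 'user-9'] 2
```

Three lookups cause only two fetches: the repeated ID is answered from the local cache.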

B.    Local Shared – Memory Cache
It is a medium-sized, locally shared memory which is applicable to semi-static and small data storage. It performs best when the user has to reach the data very fast.
C.    Distributed Cache Memory
It is a large cache memory which can be extended when required. The data in the cache is created only once, and when the same data is cached at different places, the problem of data inconsistency is avoided.
D.   Disk Cache
Since the disk is a relatively slow component, mainly constant (rarely changing) objects are appropriate for the disk cache. It is very useful for catching lost cache entries through the error-404 procedure.

III.      SHARED MEMORY ARCHITECTURE


A memory accessed by multiple programs is shared by all of them. Such programs intend either to communicate with each other or to avoid redundant copies. Inter-program data transfer is very easy with a shared memory architecture. It has two aspects, the hardware and the software perspective, as explained in [3], which are as under:-
A. Hardware Perspective
The shared memory in a multiprocessor system is a large block of RAM (random access memory) which is accessed by multiple CPUs (central processing units). Since all programs share a single view of the data, a shared memory system is very easy to program. Because many CPUs try to access the shared memory, two main issues arise:-
1) CPU-to-Memory Connection Bottleneck: Since a large number of processors request data from the shared memory, and the connection between the CPUs and the shared memory has limited bandwidth, a bottleneck situation is obvious.
2) Cache Coherence: There are a number of cache memories accessed by multiple processors. When any of them is updated and that data has to be used by other processors, the change should be reflected to the other processors as well; otherwise those processors would be working with incoherent data.
B. Software Perspective   
The software perspective of shared memory can be explained as under:-
1) Inter-Process Communication (IPC): This means the exchange of data between processes running in parallel. RAM is the place where, if one process creates an area, others are at liberty to access that area.
2) Conserving Memory Space: Here the shared memory is used as a method to preserve a single memory space for the data.
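The IPC idea above, one process creating a RAM area that another is free to access, can be sketched with Python's `multiprocessing` shared ctypes array; this is a minimal illustration, not the only IPC mechanism.

```python
from multiprocessing import Process, Array

def producer(shared):
    # One process writes into the shared region...
    for i in range(len(shared)):
        shared[i] = i * i

if __name__ == "__main__":
    shared = Array("i", 4)                   # 4 C ints in shared memory
    p = Process(target=producer, args=(shared,))
    p.start()
    p.join()
    # ...and another process (here, the parent) reads the same area.
    print(list(shared))                      # [0, 1, 4, 9]
```

Both processes observe one copy of the data, which is the space-conserving property point 2) refers to.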
C. Centralized Shared Memory Architecture
This kind of architecture has a few processor chips with small processor counts, and these processors share a single centralized memory. Each has a large cache linked to the memory bus, and the memory bus joins the processors to the main memory, as in [4]. Figure 1 shows the centralized shared memory architecture.

D.    Distributed Shared Memory Architecture

This kind of architecture has multiprocessors among which the memory is distributed. As the memory requirement of these processors increases, a distributed shared memory approach becomes more appropriate, as in [4]; it is shown in Figure 2.



IV.           CACHE COHERENCE AND COMMON ISSUES

A.    Definition
Cache coherence concerns the data stored in the local caches of a shared resource. The problem that can arise here is data inconsistency: if one client holds a copy of a memory block that is updated by another client, and the updated copy of that block is not propagated to the others, inconsistency occurs. The solution is that the local caches of all clients should be kept consistent with one another, which is made possible by a coherency mechanism. Figure 3 shows multiple caches of a shared resource.
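The inconsistency described above can be made concrete with a toy sketch (illustrative names): two clients each keep a private copy of a memory block, and without any coherence mechanism one client keeps reading a stale value after the other writes.

```python
memory = {"block0": 1}
cache_a = {"block0": memory["block0"]}   # client A's private copy
cache_b = {"block0": memory["block0"]}   # client B's private copy

memory["block0"] = 2                     # client A updates the block...
cache_a["block0"] = 2

stale = cache_b["block0"]                # ...but B still sees its old copy
print(stale, memory["block0"])  # 1 2
```

B's cached `1` disagrees with the memory's `2`; a coherency mechanism exists precisely to invalidate or update B's copy at the moment A writes.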


B. Importance of Cache Coherence
The importance of cache coherence, as explained in [5], can be determined through the following:-
1) Consistency is considered the most important factor.
2) Multiprocessors perform their tasks over a shared bus system, and the bus is always full of data traffic. The local and private caches work on it simultaneously, and the workload is tremendously reduced.
3) The shared bus system is continuously monitored by the cache controller, which keeps an eye on all transactions and takes action as per the instructions.
4) All cache coherence protocols are bound to specify the state of a block in the local cache for future requirements.
C. Achieving Cache Coherence
The process is made complete through four actions, which are as under:-
1) Read Hit
2) Read Miss
3) Write Hit
4) Write Miss
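The four actions above can be sketched as a classifier over a single cache (an illustrative helper, not a full protocol): an access is a hit or a miss depending on whether the block is present, and a read or a write depending on the operation.

```python
def classify(cache, op, address):
    """Name the coherence action for one access against one cache."""
    present = address in cache
    if op == "read":
        return "read hit" if present else "read miss"
    return "write hit" if present else "write miss"

cache = {0x10: 5}                       # one cached block at address 0x10
print(classify(cache, "read", 0x10))    # read hit
print(classify(cache, "read", 0x20))    # read miss
print(classify(cache, "write", 0x10))   # write hit
print(classify(cache, "write", 0x20))   # write miss
```

A coherence protocol is essentially a table mapping each of these four actions (plus the block's current state) to bus transactions and state transitions.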
D. Common Issues
There are a number of problem areas where cache coherence needs to be improved and addressed. The following are identified as common issues:-
1) Performance: The performance of the computer is always tied to the multiprocessors and the running programs. All programs want priority, and they overload the bus system.
2) Processor Stalls: The cache receives data from the input device, and the output device reads data out of it, so both the I/O devices and the processor observe the same data. A problem occurs when the processor stalls due to a structural, data, or control dependency.
3) Stale-Data Problem: When I/O devices deal with main memory, no problem is observed; but when I/O devices deal with the cache memory, the stale-data problem arises.
E. Recommendations
There are certain solutions available which address these issues. The following policies explain them in detail, as elaborated in [4]:-
1) Write Back: Also known as write-behind, here writing is initially done to the cache only. Once the data is to be replaced or modified by new contents, the cache block is amended and the write to the backing store is carried out. Its implementation is more complex than the others: it keeps track of the locations that will be overwritten, marking them as 'dirty' for later writing to the backing store. When such data is evicted from the cache, it must first be written to the backing store. It uses 'write allocate' on the assumption that subsequent writes will go to the same location.
2) Write Through: Here the write is carried out to the cache and the backing store at once. It uses 'no-write allocate', as there is no benefit from allocating for subsequent writes.
3) Directory-Based Protocol: In this protocol, a directory holds the sharing state of the data in all processor caches. The directory behaves like a lookup table where every processor looks for data updates. The directory keeps its records as pointers along with a dirty bit specifying permissions. It is further categorized into full-map, limited, and chained directories, as in [6].
4) Snooping-Based Protocol: Each cache monitors the address lines of a shared bus for all memory accesses made by the processors. It has two categories, known as 'write invalidate' and 'write update', as in [6].
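The write-invalidate variant of snooping can be sketched as follows. This is a simplified toy (write-through to memory, no block states, invented class names), not a full MESI-style protocol: every cache watches the shared bus, and when one processor writes a block, the others discard their copies so the next read re-fetches the fresh value.

```python
class SnoopingCache:
    """A toy write-invalidate cache snooping a shared 'bus' (a Python list)."""

    def __init__(self, bus, memory):
        self.lines = {}
        self.bus = bus
        self.memory = memory
        bus.append(self)                 # join the shared bus

    def read(self, addr):
        if addr not in self.lines:       # miss: fetch from main memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.memory[addr] = value        # write-through, for simplicity
        self.lines[addr] = value
        for cache in self.bus:           # snooped write: others invalidate
            if cache is not self:
                cache.lines.pop(addr, None)

bus, memory = [], {0x10: 1}
c0, c1 = SnoopingCache(bus, memory), SnoopingCache(bus, memory)
c1.read(0x10)          # c1 caches the old value 1
c0.write(0x10, 99)     # the snooped write invalidates c1's copy
print(c1.read(0x10))   # 99, re-fetched after invalidation
```

A 'write update' protocol would instead push the new value into the other caches inside the loop, trading invalidation misses for extra bus traffic.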

V.       RELATED WORK

A number of research works related to what is highlighted in this paper have been carried out in the past. These are as under:-
A.    Write Back
Lei Li et al. researched memory-friendly write-back and pre-fetch policies through an experiment which showed overall improved performance without a last-level cache, as in [7].
Eager write-back reduces write-induced interference: it writes back dirty cache blocks from the least recently used (LRU) position once the bus is idle, as in [8].
B.    Write Through
Inderpreet et al. researched graphics processing units (GPUs), where the write-through protocol performed better than the write-back protocol; write-back had the drawback of increased traffic, as in [9].
P. Bharathi et al. worked on a way-tagged cache architecture, which showed an improvement in the energy efficiency of write-through caches, as in [10].
C.    Directory Based Protocol
The Stanford DASH multiprocessor used a directory-based system where every directory node was a cluster of processors containing a portion of the complete memory, as in [11].
Scott et al. researched large-scale multiprocessors where state and message overheads are reduced, as in [12].
The hierarchical DDM design was built over a hierarchy of directories; every level used a bus with a snoop operation, as in [13].
Compaq's Piranha implemented hierarchical directory coherence with an on-chip crossbar, as in [14].
D.   Snooping Based Protocol
Barroso et al. worked on a greedily ordered protocol and carried out a comparison of directory-based ring and split-transaction bus protocols, as explained in [15, 16].
IBM's POWER4 and POWER5 also used a combined snooping response on a ring; on a coherence conflict, the node retries, as in [17, 18, 19].
Strauss et al. worked on flexible snooping on a bus: a snoop is performed first and then the request is forwarded to the next node in the ring, saving time, as in [20].

VI.    CONCLUSION
       Cache memory is a link between the main memory and the processor, and its main aim is to save the time of the users and the system. In a multiprocessor, each processor has its own cache which it updates locally; once the data is updated, the main memory and the other caches have to be updated as well, and the role of cache coherence is very important in this regard. This paper has highlighted the working and importance of centralized and distributed shared memory architectures and how cache coherence can be achieved.



REFERENCES
[1] Definition Cache website. [Online]. Available: http://en.wikipedia.org/wiki/Cache_(computing)
[2] Zheng Ying, “Research on the Role of Cache and Its Control Policies in Software Application Level Optimization”, Inner Mongolia University for Nationalities, China, 2012, pp. 18-20.
[3] Shared Memory website [Online] Available: http://en.wikipedia.org/wiki/Shared_memory
[4] Sujit Deshpande, Priya Ravale, et al. “Cache Coherence in Centralized Shared Memory and Distributed Shared Memory Architectures”, Solapur University, India, 2010, pp. 40.
[5] James Archibald and Jean-Loup Baer, “Cache Coherence Protocols:  Evaluation Using a Multiprocessor Simulation Model”, University of Washington, USA, 1986, pp.274-282.
[6] Samaher, S.Soomro, et al.  “Snoopy and Directory Based Cache Coherence Protocols: A Critical Analysis”, Journal of Information & Communication Technology Vol. 4, No. 1, Saudi Arabia, 2010.
[7] Lei Li, Wei Zhang, et al. “Cache Performance Optimization for SoC Video Applications”, Journal of Multimedia, Vol. 9, No. 7, China, 2014, pp. 926-933.
[8] Lee, Tyson, et al “Eager writeback - a technique for improving bandwidth utilization” Proceedings-33rd annual ACM/IEEE intl. symposium on Microarchitecture, USA, 2000, pp. 11–21.
[9] Inderpreet, Arrvindh et al. “Cache Coherence for GPU Architectures” 2013.
[10] P. Bharathi and Praveen, “Way Tagged L2 Cache Architecture Under Write-Through Policy”, IJECEAR Vol. 2, SP-1, USA, 2014, pp. 86-89.
[11] D. Lenoski, J. Laudon, et al.  “The Stanford DASH Multiprocessor” IEEE Computer, 25(3):63–79, Mar. 1992.
[12] S. L. Scott and J. R. Goodman, “Performance of Pruning-Cache Directories for Large-Scale Multiprocessors” IEEE Transactions on Parallel and Distributed Systems, 4(5):520–534, May 1993.
[13] E. Hagersten, A. Landin, et al. “DDM–A Cache-Only Memory Architecture” IEEE Computer, 25(9):44–54, Sept. 1992.
[14] L. A. Barroso, K. Gharachorloo, et al. “Piranha: A Scalable Architecture Based on Single-Chip Multiprocessing” In Proceedings 27th Anu. Intl. Symposium on Computer Architecture, 2000, pp. 282–293.
[15] L. A. Barroso and M. Dubois, “Cache Coherence on a Slotted Ring” In Proceedings of Intl. Conf. on Parallel Processing, 1991, pp. 230–237.
[16] L. A. Barroso and M. Dubois, “The Performance of Cache-Coherent Ring-based Multiprocessors” In Proceedings of 20th Anu. Intl. Symposium on Computer Architecture, 1993, pp. 268–277.
[17] J. M. Tendler, S. Dodson, et al. “POWER4 System Microarchitecture” IBM Server Group Whitepaper, Oct. 2001.
[18] B. Sinharoy, R. Kalla, et al. “Power5 System Microarchitecture” IBM Journal of Research and Development, 49(4), 2005.
[19] S. Kunkel. “IBM Future Processor Performance” Server Group. Personal Communication, 2006
[20] K. Strauss, X. Shen, et al. “Flexible Snooping: Adaptive Forwarding and Filtering of Snoops in Embedded-Ring Multiprocessors” In Proceedings of 33rd Anu. Intl. Symposium on Computer Architecture, Jun 2006.
