LawStudy   
Oct 29, 2019

Introduction

The relationship between acculturation, colonialism, and education is a formidable one, with cultural imperialism through educational structures markedly visible even in the twenty-first century (Slowey 5). As a dominant form of socialization, the education of children has frequently served as a mechanism for nation-building throughout human history, with curricula and institutional forces aimed at cultivating a loyal citizenry. The most brutal instances of attempted acculturation through education, however, have sought to eradicate the norms, traditions, and practices of indigenous cultures by using children as a medium for cultural imperialism. Canada's residential school system, at its inception, served as an attempt to address the perceived socioeconomic and sociopolitical needs of a young nation that was seeking cultural cohesion (Slowey 5). The education of youth through the new, poorly structured and grossly misguided system was motivated by various economic forces, with current discourse regarding residential schools highlighting the atrocities which pervaded the system. The following inquiry examines the economic motivations behind Canadian residential schools, asserting the perceived importance of acculturation due to shifts within the economic sphere relative to resources and the needs of a growing and shifting population. This work theorizes that the resonating effects of residential school policies continue to impact the Aboriginal people in Canada, with the same economic motivators which shaped the system during the nineteenth century persisting in the current globalized marketplace.

The residential school system has now been broadly and fervently condemned for a wide spectrum of abuses, with the very theoretical ground on which the system rested significantly immoral and indicative of the proverbial White Man's Burden to educate colonized populations (Fear-Segal 32). Slowey posits that while the nation of Canada has sought to remedy the ills associated with residential schools and colonization, the Aboriginal people in the nation continue to be affected by economically driven policies which undermine their autonomy. Moreover, these policies exist under the guise of self-governance, with the surface-level agenda being to wield diversity as a resource rather than a liability through fortification of the Aboriginal identity. Slowey describes this process as follows:

"In this era of globalization, in which corporations assume a more dominant role in all spheres of life, the Canadian government is involved in a process of significant restructuring driven by a neoliberal agenda. In accordance with this vision of a minimalist state and unfettered market-driven development, self-government is being promoted as a means for political autonomy as well as for economic development in Aboriginal communities--all considered critical elements of 'decolonization.' Through the process of devolution, the state is, arguably, promoting self-governance as a means to enhance the opportunity of Aboriginal peoples to enter the market society and liberate them from traditional colonial constraints" (5).

Neocolonialism, the same author posits, represents forces similar to the more blatant colonialism that created the residential school system, with neocolonialism continuing to suppress the Aboriginal people while the Canadian government makes apologies for the residential schools.

Canadian Residential Schools' Emergence



The legacy of residential schools is an unfortunate one, with scholars suggesting that of all the actions against the Aboriginal people during the nineteenth century for which the Canadian government has sought reconciliation, the residential schools were the most egregious (Dyck and Tanner 7). The Department of Indian Affairs (DIA) charged four Christian churches with running the residential school system in the 1880s, with the associated policies interfering not only with Aboriginal culture but also with their governance and economic development. Irrefutably, the motivations for structuring residential schools were socioeconomic in nature, with colonialism always sourced from a dangerous combination of economic greed, resource imbalance, and a perceived inequality between the colonizer and the colonized.

The residential schools were unapologetically structured as a channel for assimilating the Aboriginal people to the dominant, Euro-centric culture. Residential schools yielded two effects that were not intended by policy-makers; these were the justified anger of the Aboriginal people against the Canadian government and the widely visible message that the government did not value the culture of the indigenous population. Unfortunately, it was the youngest generations of the Aboriginal people who suffered most immediately, with significant instances of child abuse, neglect, and poor educational practices now well-documented (Dyck and Tanner 8).

Emerging as early as the late 1700s within New Brunswick, the original residential schools were masked as missionary efforts. The residential schools spread throughout the nineteenth century, with missionaries aiming to convert First Nations people by targeting children. Notably, the efforts of the churches were not universally sourced from ill-will toward the indigenous people, with legitimate concern for the children's safety now acknowledged as very real during the early decades of the nineteenth century. The displacement of the Aboriginals had begun long before the inception of residential schools, with the churches tasked by the DIA acknowledging that the Aboriginal children were in danger due to their social immobility (Smith 6).

The economic motivations for acculturation, however, were not sourced from a desire to protect the First Nations children; they were driven first and foremost by incompatibilities between the cultures of the colonizers and the colonized. The displacement of the Aboriginal people occurred geographically first, with the colonists pushing them off of their lands due to desired resources. Understandably, the fragility affecting the now socially immobile Aboriginal people undermined their cultural cohesion and created conditions through which they could be even more easily dominated. Substance abuse became rampant in many, though certainly not all, Aboriginal societies, with the missionaries acknowledging that the social environment and economic instability affecting the indigenous children was not an optimal context in which children could be educated and raised (Smith 6). Ultimately, it was the colonizing population, namely the white, Eurocentric population, which created the severe instability for the Aboriginal people that then warranted cohesive, political efforts to address the emerging problems.

The residential schools then emerged in order to serve the needs of the colonizers rather than the First Nations people (Schabas 4). The instability created by the displacement of the Aboriginals sourced a wide spectrum of problems, with the remaining desire to assimilate the population to the dominant population additionally informing the residential school efforts. The First Nations people had been robbed of their autonomy, unable to sustain themselves in the new and unfamiliar political, social, and economic environment, with the residential schools emerging as a channel for reducing the new, perceived burden on Euro-Canadian society.

Residential schools were not primarily a means of protecting indigenous children but, more saliently, a means of assimilating, and by extension controlling, their futures. The residential schools were structured under the language of self-sufficiency, a language which continues to inform Aboriginal policy in Canada, with the treaties between the First Nations people and the colonizers directly informing the context of the schools. Politically, it was these treaties, many of which were signed during the 1870s in order to promote more effective assimilation efforts, which cited the government's responsibility to place schools on the new First Nations reservations. Schabas articulates the initial legislation as follows: "The initial legislation was called the Gradual Civilization Act and the ideological underpinning was assimilationist. The schools' proponents pledged they would end the 'Indian problem' within a few generations. More than 150,000 young Native people were placed in such institutions. They were often forbidden to speak their language and practise their culture, and were subjected to various forms of ill-treatment and brutality" (5).

Most of the stakeholders in the residential school system during the latter decades of the nineteenth century acknowledged the very real economic needs of the First Nations people; they had been displaced from their land, were unable to meaningfully compete in the new marketplace, and needed to survive the irrevocable degradation of their own economy. In many areas, the colonized people had a longstanding, bison-based economy which had thrived, at least partially migratory, for generations; this was eradicated in a comparatively short period of time by the colonists. The new schooling policies would create an opportunity for Aboriginal people to survive in their new environment, with the language of the legislation focusing on the positive changes the schools would create.

The emerging schools, however, were located external to First Nations communities, with evidence now suggesting that this was an intentional effort to more aggressively assimilate the indigenous people by generationally fragmenting their society. Students had inadequate healthcare, food, and educational resources, with evidence now highlighting atrocious neglect and abuse in the schools. DeLeeuw describes the residential school system discourse during the 1870s in terms of aggressive civilization, with the removal of children from their families the principal feature of residential schools: "....Aboriginal children were 'kept constantly within the circle of civilized conditions' where they would receive the 'care of a mother' and an education that would prepare them for a life in modernizing Canada....Confined and specific sites were vital in the transmission and enactment of colonial ideologies" (345). In the absence of their social support network and faced with a network of sexual and physical abuses, the students of residential schools were not acculturated, as was the intention; instead, the schools became a lasting source of animosity toward the Canadian government within Aboriginal societies.

The failed residential schools were a growing problem throughout the first half of the twentieth century, with the government beginning to close them during the 1940s. The effort to eradicate the schools was a slow one, however, with the last residential school in Saskatchewan closing in 1996. It was not until the late 1990s that fervent research began on the atrocities affecting residential school students, with these instances of abuse now well-documented. The churches tasked with running the schools were the first to apologize and support reconciliation, with the Canadian government not targeted for its role in the process until much later. The Truth and Reconciliation Commission (TRC), tasked with examining the unfortunate history of the residential school system and its legacy, emerged from the Indian Residential Schools Settlement Agreement (IRSSA) in 2007 (Schabas 3). The established compensation system aims to serve as reconciliation for the brutal impact of the residential schools, with survivors of the system provided $10,000 in compensation in addition to $3,000 for each year of attendance; these amounts have been condemned, however, as never being able to heal the physical, mental, and emotional damage done to the students of residential schools.

Economic Change, Education, and the White Man's Burden



Economic changes resultant from colonization processes rarely benefit the indigenous population. The residential schools aimed to address the instabilities of the economic environment emerging during the colonization process, which had created a wide gap between the sufficiency of the dominating Euro-Canadian population and the instability of the socially immobile First Nations people. The movement of the students away from their communities and families, as well as the schools' gross lack of resources, accountability, and general integrity, was indicative of a total dismissal of the humanity of the indigenous population. Fear-Segal posits that the Euro-American and Euro-Canadian populations' educational efforts targeting the indigenous people of the United States and Canada were sourced from a genuine desire to cultivate contributing citizens juxtaposed with a desire to maintain their inferiority; the policy makers wanted to reduce the economic burden on society but not allow the indigenous people to flourish (32). Fear-Segal describes this as follows:

"The government's new commitment to educating all Indians and assimilating them into the Republic preempted the answer to a question that had been long debated and still haunted the minds of many white Americans. Could white schooling prepare native children for equal citizenship? ...Advocates of federal Indian education were engaged in a new and controversial venture.... Yet the common goal of reformers masked a fundamental division in their perceptions and constructions of the Indian. In deliberating over how to transform the Indian, they were forced to broach the thorny problem of racial difference; to ask not only in what ways Indians were dissimilar to whites but to confront the essence and source of that difference" (31).

Conveniently, the guise of morality and religious duty served the economic needs of the dominant European colonist population very well, with their socioeconomic and sociopolitical superiority ensured by the use of poor-quality education, forced displacement of the younger generation, and abuse that would reduce the life quality of First Nations children as they grew.

The intention was to cultivate a dependent, inferior citizenry that would never be allowed to flourish but would be less of an economic burden. The ideology of the residential schools was grounded in Enlightenment ideals through which the civilized Christians were tasked with acculturating, and thereby saving, the indigenous people. The Enlightenment ideas, particularly as they evolved throughout the nineteenth and twentieth centuries, were markedly dangerous, as they served to ground prejudice in science. Fear-Segal describes this ideology as follows: "Recruited to classify different societies into a hierarchical scheme, "science" played an important role in suggesting that nonwhite "savages" were socially inferior to members of civilized society and that their social inferiority had a biological or racial counterpart" (32). The nature of this discourse created an insurmountable barrier to the colonized people, as they were characterized as having permanent traits which rendered their inferiority innate and irrevocable.

The expanding European empires coincided with spreading capitalist practices throughout Western society, with consequently mounting competition creating conditions conducive to colonialism. The desire for greater resources, and by extension more land, paralleled social issues such as need for religious freedom and the duty to civilize those perceived as savage (DeLeeuw 340). In Canada, the colonial actions targeting the Aboriginals destroyed their existing social, economic, and political practices, with the residential schools a channel for social and economic control. DeLeeuw posits that geographic incursion coincides with the destruction of entire societies within a rapid period of time during the process of colonization, with the ideologies supporting economic gain justifying racial prejudices (341). In turn, the language of inferiority then allows for the dominating population to enact any number of atrocities against the colonized population. DeLeeuw describes this interaction between social and economic ideologies as follows:

"Colonial action, however, requires an ideological framework of explanation and rationalization. Such ideological frameworks, as many post-colonial theorists have argued, are comprised of nuanced social practices and cultural iterations which insist that (particularly non-white) non-Euro colonial peoples, and all elements of their existences, are flawed and inherently inferior.... To think of this in another way, it is helpful to understand colonialism, like racism, as a set of practices and outcomes arising from the cumulative merger of thoughts, discursive iterations and bureaucracies or laws.... These constructions then informed, and manifested into, structural undertakings, including residential schooling and (en)forced colonial education" (345).

Residential schools were grounded in the assumption that the indigenous people were in need of transformation; from one perspective, this was true in that the colonists had created an environment to which the indigenous people of the land were not accustomed and within which they could not sustain themselves.

Relevance of Residential Schools to Modern Aboriginal Politics Within the Global Marketplace



While the historic colonial period has ended within the global community, the relevance of the economic circumstances affecting Aboriginal politics during the late nineteenth century persists in the twenty-first century. Aboriginal politics in Canada mirror relations between majority and minority populations around the world, particularly in areas having a history of colonialism. The language of neocolonialism has emerged with respect to Aboriginal politics in Canada, with the resonating impact of residential schools highlighted as a more blatant manifestation of similar relations, now less visible, between First Nations people and the Canadian government. Kulchyski describes Aboriginal politics in Canada as a "specific terrain of struggle" which is complex and inextricably bound to economics (8).

The Canadian government has, from its inception, sought to accumulate capital by constructing a set of circumstances for the First Nations people that maintains their inferiority. Kulchyski describes these circumstances as sets of temporal and spatial actions which form barriers to the accumulation of capital for Aboriginal people (9). The way in which the Canadian government most immediately impeded the First Nations people from accumulating capital was via geographic displacement and, later, abusive policies in the name of education: "The most notable struggles of Aboriginal peoples against capital have been over the exploitation of non-renewable resources in the Canadian hinterland, and the resulting environmental impact such exploitation has, including the negative consequences for those whose material livelihood depends upon a subsistence economy directly linked to the land as the means of subsistence" (9). The same author suggests that the economic motivations behind residential schools, which separated and subjugated the indigenous societies in the name of self-sufficiency, continue to persist in many current policies which claim to afford Aboriginal people in Canada access to jobs and education.

Self-sufficiency, specifically, emerged within the language of residential school policy during the late nineteenth century, with the language of self-sufficiency persisting in the twenty-first century (Slowey 5). Slowey argues that the Aboriginal people in Canada remain heavily dependent on the state, and specifically on native self-government policies, and that the current neoliberal agenda will undermine the well-being of First Nations society due to the general weakening of the nation-state within the international political community (5). Current Aboriginal politics are defined by new partnership encouragement, new fiscal relationships, and greater autonomy in governance within Aboriginal societies in order to bolster self-sufficiency. Specific manifestations of these ideals, however, are very much in line with corporate interests and the desire to control indigenous land (Slowey 5). Motivations similar to those which created residential schools, in short, continue to exist within Canada, with self-sufficiency providing a guise for serving the economic interests of the dominant population.

Conclusions

The residential schools which were so detrimental to the indigenous populations of Canada during the nineteenth and twentieth centuries represented a perceived, albeit very poorly planned, solution to problems affecting the post-colonial Canadian environment; this environment was marked by declining stability among the Aboriginal people's societies due to forced, drastic changes to their world including displacement, ill treatment, and policies which subjugated them severely. Residential schools would fulfill the need, it was assumed, to acculturate the First Nations children through educating them, creating a Canadian citizenry that was loyal to the state but not positioned to become socially or economically superior to the dominant population.

Economic changes to the Aboriginal way of life included a total dissolution of their longstanding practices which did not afford them a place within the new Euro-Canadian society. The schools allowed for acculturation to take place which would frame Aboriginals as economic contributors, but the aggressive practices of removing them from their homes and ways of life undermined their emotional stability. In the absence of necessary resources, residential schools floundered and represented an egregious violation of human rights for Aboriginal children and their families. This inquiry concludes that while the relationship between education, assimilation, and colonization is well-known, the desire to truly render the First Nations children as contributing citizens was affected significantly by a concurrent desire to maintain their inferiority, a social position which was maintained by ideologies which framed their lower status as inborn. Lower quality education would allow them to sustain themselves, or so the policy makers assumed, but never transcend the instilled cycle of social immobility. Most saliently, the language of self-sufficiency continues to impact Aboriginal politics in the twenty-first century, with scholars positing that the imposition of job and educational opportunities which are not culturally competent actually undermines First Nations people's ability to be empowered in the global marketplace.

Works Cited

De Leeuw, Sarah. "Intimate Colonialisms: The Material and Experienced Places of British Columbia's Residential Schools." The Canadian Geographer 51.3 (2007): 339-355.

Dyck, Noel, and Adrian Tanner. "Differing Visions: Administering Indian Residential Schooling in Prince Albert, 1867-1995." Anthropologica 40.2 (1998). Questia.

Fear-Segal, Jacqueline. White Man's Club: Schools, Race, and the Struggle of Indian Acculturation. Lincoln, NE: University of Nebraska Press, 2007. Questia. Web.

Kulchyski, Peter. "Aboriginal Peoples and Hegemony in Canada." Journal of Canadian Studies 30.1 (1995): 7.

Schabas, William A. "Truth vs. Reconciliation? as Canada's Residential Schools Commission Launches, Worldwide Precedents Suggest We Might Not Get Both." Literary Review of Canada Nov. 2010: 3-11.

Slowey, Gabrielle A. "Globalization and Self-Government: Impacts and Implications for First Nations in Canada." American Review of Canadian Studies (2001): 5.

Smith, Derek G. "The 'Policy of Aggressive Civilization' and Projects of Governance in Roman Catholic Industrial Schools for Native Peoples in Canada, 1870-95." Anthropologica 43.2 (2001): 6.
LawStudy   
Oct 22, 2019

At present, UbiTools offers a wide variety of pervasive computing choices for the medical field, including the presence clock. While the presence clock has the ability to detect changes in routine in elderly persons and thus alert caregivers who are monitoring their movements, UbiTools is currently considering the development of an improved presence clock with additional applications that would monitor other aspects of the individual's life, such as whether or not there are clean towels, whether or not there is food in the refrigerator, how high (or low) the thermostat is set, and so forth. All data would be monitored via downloadable applications which could reside on the caregiver's tablet computer (e.g., an iPad) and give notice when some aspect of the household falls outside set parameters. Given the potential inherent in this technology - the ability to monitor and protect not only elderly persons, but also physically and/or mentally disabled individuals as well as children - it makes sense for UbiTools to continue research and development into the improvement of the presence clock, as well as other ubiquitous computing technologies that can help safeguard the lives of millions of vulnerable individuals.
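The monitoring logic described above reduces, at its core, to a set of threshold checks that raise alerts when a household reading falls outside the caregiver's set parameters. The following Python sketch is purely illustrative: the check names, sensor readings, and thresholds are assumptions for demonstration, not actual UbiTools interfaces.

```python
# Illustrative sketch of threshold-based household monitoring.
# All names and values here are hypothetical, not UbiTools APIs.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class HouseholdCheck:
    name: str
    read_sensor: Callable[[], float]   # returns the current sensor reading
    low: float                         # acceptable lower bound
    high: float                        # acceptable upper bound

def run_checks(checks: List[HouseholdCheck]) -> List[str]:
    """Return an alert message for every reading outside its set parameters."""
    alerts = []
    for check in checks:
        value = check.read_sensor()
        if not (check.low <= value <= check.high):
            alerts.append(f"ALERT: {check.name} reading {value} outside "
                          f"[{check.low}, {check.high}]")
    return alerts

# Example: a thermostat set-point check and a refrigerator stock check.
checks = [
    HouseholdCheck("thermostat_f", lambda: 58.0, low=62.0, high=78.0),
    HouseholdCheck("fridge_items", lambda: 12.0, low=5.0, high=100.0),
]
print(run_checks(checks))  # only the thermostat is below its set range
```

In a deployed system, `read_sensor` would wrap a real device query and the alerts would be pushed to the caregiver's tablet application rather than printed; the per-check bounds correspond to the "set parameters" the report describes.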

Opponents of such technologies launch the typical arguments against same: namely, that they take away self-determination for individuals; that they have the potential to be used by governments and corporations to monitor the lives of "ordinary" citizens who do not need such surveillance; that they introduce an aspect of "Big Brother" into a society that is presently struggling to maintain autonomy in the face of increased oversight. From such opposition springs UbiTools' conflict over whether or not to proceed in developing and marketing the improved presence clock (as well as other similar technologies in the future). This report will seek to convince the CEO that it is not only prudent, but necessary in a business sense, for UbiTools to continue to develop not only improvements in the presence clock, but also further types of ubiquitous computing technologies, as this is a trend that will not change. First, the report will detail some of the current (and potential) uses of ubiquitous computing in the health care fields. Next, it will present some of the opposing arguments against the use of such technology, followed by a discussion of ways corporations and individuals can build and implement such technologies to minimize the fear of negative applications of same. Next, thoughts concerning the general trade-off between privacy and security regarding pervasive technology will be presented, as will some of the benefits and limitations of ubiquitous computing. Finally, recommendations for developing and marketing the improved presence clocks will be offered. It is the hope of the author that the CEO of UbiTools will take this report into consideration before making any decisions regarding the presence clock.

Current and Potential Uses of Ubiquitous Computing in the Health Care Fields



Before engaging in a review of the current application of pervasive computing, it makes sense to note that the concept of "remote doctoring" is decades old. "Dowler and Hall considers TLM [telemedicine] being started as telediagnostics by psychiatrists as early as 1955 using interactive television" (Dowler & Hall, as cited in Rashvand, Salcedo, Sanchez, & Iliescu 2008, p. 238). However, the radio was used even earlier than that; in 1924, magazine covers heralding the arrival of the "Radio Doctor" were published (Rashvand, Salcedo, Sanchez, & Iliescu 2008). Thus, it is important to remember that computers merely improve and continue a tradition that began a long time ago. At present, Rashvand, Salcedo, Sanchez, and Iliescu break the current applications down into four categories: Telecare Services; Continuous Care Support Service; Information Service to the Citizens; and Training and Provision of Information Services to Medical Staff.

Having said this, however, it is clear that ubiquitous computers are allowing a blossoming of medical-related technology. Given that, as well as the concurrent aging of populations, it stands to reason that there has been an influx of new ways to apply these technologies to health care for elderly people (Haux et al 2008). However, this confluence is not just an opportunity for growth; it is, in fact, becoming increasingly clear that "informatics support through health-enabling technologies leading to pervasive health care is one important option to be seriously considered" (Haux et al 2008, p. 79), since ubiquitous computing is becoming the norm, rather than the exception. Current applications include monitoring for emergencies, which can create alarms sent to caregivers; the management of disease (e.g. reminders to take insulin injections); and the ability to provide advice regarding general health care issues. Potential applications include the ability to give often isolated elderly people social interaction, entertainment, education, and assistance with daily activities.

Stanley and Osgood (2011) note that sensor-based technologies can be used by an individual to monitor him/herself as well, keeping track of physiological functions, giving data regarding social networks, improving compliance with medical regimens by providing useful data, and so forth. Such sensor-based technology can also be tapped into by caregivers and health-care professionals, such that the collected data can be analyzed to determine everything from possible changes in medicine to whether or not an individual has fallen and is unable to get help. Kim et al (2008) take this a step further and explore a "stand-alone ubiquitous evolvable hardware (u-EHW) system" that consists of an embedded processor, a computer chip, and a hand-held terminal that essentially allows for a "mobile ECG" to be done, on a continuous basis, so that heart disease can be monitored all of the time, as opposed to only in periodic appointments when something could be missed. Beyond monitoring, wearable devices can not only keep track of particular medical conditions, but can also provide treatment.

The context-aware mobile communication system (CHIS) is another application of ubiquitous computing technology when applied in a health care context. It allows context-aware messages to be sent among hospital personnel, so that messages are not delivered unless certain conditions are met; it allows monitors to be set up that, when specific people pass by them, display information relevant to those particular people (e.g. when a nurse passes by, it might be set to display all of the information about her patients); and it can allow people to monitor their own privacy, in an interesting twist (meaning that people can set the device to divulge to others only certain information about, say, their location). While this device is being tested in hospitals at present, the application to a home setting is clear. For example, messages could be exchanged between caregivers and the people whom they are watching, or those being monitored might hide their location from their watchers when they use the bathroom.
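The core idea behind the context-aware delivery described above is that a message is held until its delivery conditions (such as recipient location and role) are satisfied. The Python sketch below is a minimal illustration under assumed field names; it does not reflect the actual CHIS rule language or implementation.

```python
# Minimal sketch of context-aware message delivery: each pending message
# carries delivery conditions and is released only when the recipient's
# current context satisfies them. Field names are assumptions, not CHIS APIs.

from dataclasses import dataclass
from typing import List

@dataclass
class Context:
    location: str          # where the recipient currently is
    role: str              # e.g. "nurse", "physician"

@dataclass
class PendingMessage:
    text: str
    required_location: str
    required_role: str

    def deliverable(self, ctx: Context) -> bool:
        return (ctx.location == self.required_location
                and ctx.role == self.required_role)

def deliver(queue: List[PendingMessage], ctx: Context) -> List[str]:
    """Deliver (and remove from the queue) messages whose conditions now hold."""
    delivered = [m.text for m in queue if m.deliverable(ctx)]
    queue[:] = [m for m in queue if not m.deliverable(ctx)]
    return delivered

queue = [PendingMessage("Patient 12 labs ready", "ward-3", "nurse")]
print(deliver(queue, Context("ward-3", "nurse")))   # conditions met: delivered
print(deliver(queue, Context("ward-3", "nurse")))   # queue now empty
```

The same pattern extends to the privacy twist the paragraph mentions: a user's `Context` could simply withhold or coarsen the `location` field, which would prevent location-conditioned messages and displays from firing.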

These are just a few of the current health-care related applications of pervasive computing. While a comprehensive review is beyond the scope of this report, it does bear saying that one review of the literature, written in 2008, listed a full 67 different systems and devices then on the market. That review was not meant to be exhaustive, meaning that these were merely the authors' own selections of devices that had been written about in previously published studies, indicating that the actual number of items on the market was, of course, much higher.

Arguments Against the Use of Ubiquitous Computing



Arguments against the use of ubiquitous computing technology range from the practical to the more abstract. In terms of the latter, one of the largest concerns is that "technology is altering, or defiling, the sacredness of human life" (Winter 2008, p. 200). While one might dismiss this as the argument of those too old, too obstinate, or too "religious" to be considered part of the mainstream, the fact is that many ethicists believe that the very definition of what it means to be human will necessarily change - and must be addressed - in the coming years, as pervasive computing surrounds us with more intensity and volume. What will it mean when we are literally surrounded by machines that are smarter than we are? What will it mean when humans no longer make decisions related to, for example, our own health care? These and other questions might seem ridiculous now, but when one imagines a world in which we are outnumbered by intelligent devices we ourselves have set in place, it is clear that such a redefinition is far better done sooner rather than later; the concern, of course, is that we will not get around to it until it is too late.

Opponents who argue against the use of ubiquitous computing see this future world as "sinister" (Pimple 2011, p. 29). They worry that when machines are created to think for themselves, those who created the machines will be exonerated if and when things go wrong, allowing them a sort of "moral passcode" for doing as they wish. They worry that the very nature of pervasive computing devices leads their users down the road to "coercion, surveillance, and control" (Shilton, as cited in Pimple 2011, p. 30), much as the very existence of nuclear weapons is concerning to those who believe that "absolute power corrupts absolutely." Moreover, they are concerned about issues such as consent, and cases in which an agreement to use ubiquitous computing devices, such as the presence clock, might appear to be consensual on the outside but is not.

Building Pervasive Information Systems Without Negative Implications



The primary way for ubiquitous computing devices and systems to be designed and implemented without the taint of coercion or undue control is deceptively simple: ensure that all parties involved consent to their use. However, it is also clear, from the arguments presented above, that this is easier said than done. What is consent to a 90-year-old woman who is terrified to say no to anything her son proposes because he's threatened to lock her up in a nursing home the minute she does? What is consent to a mentally capable, but physically incapacitated, teenager whose parents are sick of him and would rather lock him in a room with a monitor than interact with him?

One can create such scenarios almost ad infinitum. Having said that, one can also do so with any technology in the world. Sell a gun and the manufacturer might be blamed for a break-in in which three people are murdered, or credited with saving the lives of soldiers in Iraq. Can the manufacturer be held liable for either action? Regulations differ from nation to nation, but that is indeed where to begin. Thus, those in the ubiquitous computing industry, like UbiTools, are well advised to be proactive when it comes to regulation. This will be addressed in more detail in the marketing section below, but it is the first step in building systems that are as free as possible from negative uses. The second step is to engage in direct, open dialogue with the public about all of the ramifications, good and bad, of pervasive computing devices and systems, and to show how the good far outweighs the bad. For example, researchers concerned with "granny cams" and other surveillance devices that have been controversial in the past suggest that

in addition to a research mission statement, a data safety monitoring board comprised of research investigators, nursing home administration and staff, as well as cognitively intact residents, family members, and an ethicist, could be implemented by the LTC facility to regularly and proactively identify and remedy potential problems that arise during the course of a research project. (Bharucha et al 2006, p. 619)

Finally, safeguards can be built into the technology itself, such that, for example, unauthorized users who attempt to change or otherwise manipulate the devices cause them to shut down and alert those who are "monitoring the monitors" (e.g. police, health officials, etc.). Such safeguards can and should be made public knowledge so as to deter crime as opposed to providing a tool to solve it, much as the advertisement of alarm systems on fancy homes serves to deter those who would break into them.
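The shutdown-and-alert safeguard just described can be sketched as a small guard around any configuration change. This is a conceptual illustration under stated assumptions: the authorized-user set, method names, and string-based alerts are stand-ins, where a real device would authenticate cryptographically and notify its monitors over a secure channel:

```python
class MonitoredDevice:
    """Minimal sketch of a tamper safeguard: unauthorized change
    attempts shut the device down and alert those monitoring it."""

    AUTHORIZED_USERS = {"caregiver-01"}  # hypothetical credential list

    def __init__(self, alert_fn):
        self.active = True
        self.alert_fn = alert_fn  # called when tampering is detected

    def reconfigure(self, user, setting, value):
        """Apply a settings change only for authorized users."""
        if user not in self.AUTHORIZED_USERS:
            self.active = False                          # shut the device down
            self.alert_fn(f"tamper attempt by {user}")   # notify the monitors
            return False
        return True  # a real device would apply the setting here

alerts = []
device = MonitoredDevice(alerts.append)
device.reconfigure("intruder", "sampling_rate", 0)
# device.active is now False and one tamper alert has been recorded
```

Publicizing that devices behave this way serves the deterrent purpose described above, much like an advertised alarm system.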

The Trade-Off Between Privacy and Security for Pervasive Computing: Some Thoughts



In a small town, the cliché goes, a child can steal a candy bar from a store on one end of Main Street at 4 PM, and his mother will know about it ten minutes later, before he gets to his home on the other end of Main Street. This is because everyone knows everyone else; everyone is looking out for everyone else; and people feel a sense of responsibility toward each other to inform them of goings-on that concern them. Some might argue that in small towns, there is a loss of privacy because of this "snooping" mentality; such people are more comfortable in the anonymity of large cities where no one appears to care what they are doing. For those people, the devices and systems associated with pervasive computing are definitely not worthwhile; to them, privacy trumps security, and they would rather risk a stolen car than live with the knowledge that someone, somewhere, might well be watching to see who might be stealing their car.

And that is all well and good. Such technologies, as with all technologies, have the potential to be freely chosen, just as people will choose to live in a small town or a big city, depending upon their personal goals and inclinations. The word "potential" is used because, as discussed in the arguments against ubiquitous computing, it is certainly possible to misuse such technologies and turn them into instruments of terror (at worst) and benign, non-consensual surveillance (at best). That is true with every weapon, every tool, every technology, and always has been. It is important, however, that the mere raising of the issue of "privacy versus security" be met with the truth: such has always been the case, ever since humans decided to live with one another in tribes, small towns, and civilizations of all kinds, and it always will be. The challenge, of course, is to ensure that this balance works as well as it can for the benefit of the greatest number of people possible. But to decide that such technologies are not worth pursuing because there is a chance that people's free will may be overridden by them is to decide not to develop some of the most potentially positive, life-changing tools to appear on the horizon in a very long time.

Benefits and Potential of Ubiquitous Computing



One of the clear benefits of ubiquitous computing, despite some of the arguments against it, is the fact that it blends into the background. Compare a sensor in a clock that otherwise appears normal with, say, an enormous machine designed to pick up and broadcast signals across a distance to a caregiver. The mere presence of that machine tells individuals, all the time, that they are vulnerable and being watched, whereas the clock looks like a clock; that is, individuals can forget about being monitored by a son or daughter and get on with their lives. Back in 1991, Mark Weiser, the creator of the term "ubiquitous computing," had this to say: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it" (Weiser 1991, as cited in Hargraves 2007, p. 4). This profundity - this ability to blend into daily life until it is forgotten - is perhaps the greatest gift of such technology, as it acts as an assistant instead of a nagging reminder. Weiser further argues that it is only the fact that computers are newer technologies - as opposed to the written word, for example, which is ubiquitous without argument - that makes them feel "arcane" and thus set apart (1991, as cited in Hargraves 2007, p. 4). In other words, computers are still new, especially to elderly people, but they continue to feel less and less so, and once we, as a society, can accept their presence in our lives, they will feel as "regular" as the written word that surrounds us.

Once in place, ubiquitous computing has the opportunity to revolutionize what "independence" means, especially to those who are vulnerable and need a higher degree of care. Orwat, Graefe, and Faulwasser (2008) note that, "Some of its capabilities, such as remote, automated patient monitoring and diagnosis, may make pervasive computing a tool advancing the shift towards home care, and may enhance patient self-care and independent living" (p. 2). In other words, people who might otherwise be forced to remain in rehabilitation facilities, nursing homes, or other places of assisted living, might instead be able to return to their own homes because they will have the protection of ubiquitous tools surrounding them, monitoring them, keeping them safe.

The notion of ambient intelligence takes pervasive computing a step further, into a future previously only imagined in science fiction novels. Yet it is critical to remember that such steps are achievable within our lifetime, and those who take them stand to benefit greatly for having the courage to do so. Imagine an environment that is "aware of the user's context" and provides information to the user when needed, in the amount needed, no more, no less; an environment that provides said information in an effortless, simple, invisible fashion; and an environment that learns from, and adapts to, the user such that it can be of greater use in the future (Rodriguez, Favela, Preciado & Vizcaino 2005). Imagine, then, living in a place where not only your weight, blood pressure, heart rate, and other physiological markers are monitored, but where virtual reality allows you to wake up in a simulation of an ocean-side apartment, or visit a museum where a lighted pathway appears, just for you, guiding you to the paintings and exhibits you most want to see (Tucker 2006). This is the promise of ambient intelligence, the next generation of pervasive computing. Futurists predict we will live in an "AmI" world by 2020 (Tucker 2006); those who take steps now to be at the forefront of this technology only stand to benefit.

Limitations to Ubiquitous Computing



Clearly, we are still in 2011, and the world of 2020 - the world in which we all get to wake up by the sea - is far away. This is the primary limitation of ubiquitous computing: ideas and uses run far ahead of existing technology, making it seem almost impossible to achieve the hoped-for ideal of perfect awareness and absolute protection. It is also the case that even when such devices and systems have been developed, not all people will have access to them. Technology is expensive when first developed, and while costs come down rapidly once sales volume increases, it still takes time; and even time isn't enough to overcome the poverty in many parts of the world. This, then, is a second limitation of pervasive computing: the reality that not everyone will be able to afford it, which is especially troubling given that so many of its health-care applications are so clearly beneficial. A final limitation is expressed in the old adage "garbage in, garbage out." Such technology is only as good as those who create it, program it, and use it; and unfortunately, the world will always contain sub-par manufacturers, inventors, designers, and so forth.

Recommendations for Development and Marketing of the Improved Presence Clock



In light of the previous discussion, it is strongly recommended that UbiTools proceed with developing the presence clock such that its applications can be broadened and, ideally, such that the device can ultimately interface with other pervasive computing tools to be developed in the future. Moreover, it is strongly recommended that such development be granted full resources, without "skimping"; to do so would compromise the quality of the end results, and savings now will mean shortcomings later.

Marketing the presence clock should begin with a public information "blitz," starting with focus groups and research studies showing that users themselves - that is, for now, elderly people, but in the future people with physical and mental disabilities - welcome the device as a way to improve their lives. As van Hoof, Kort, Rutten, & Duijnstee (2011) found in their study of 18 elderly people living in communities that help them with an intensive list of needs, users by and large find that ubiquitous computing devices and systems increase their sense of health and safety in their own homes. When people hear such things "from the horse's mouth," as it were, they tend to give them far more weight than when "some company" tells them the same thing. Once underway, marketing should continue to focus upon education, enhancement, and enlightenment as the "Three Es" of this "new era" of safety, health, and happiness. As Pimple (2011) said, "[w]hen presented properly, the benefits of pervasive IT are obvious" (p. 30). UbiTools must ensure that pervasive computing devices and systems are always presented properly.

Conclusions

While it is beyond the scope of this report to present a full marketing campaign for the presence clock, rest assured that such a campaign could easily be created. The technology to create a world in which people are surrounded by pervasive computing devices and systems already exists. Ambient intelligence will pervade the environments of the future regardless of what anyone might wish to the contrary. Moreover, this technology is sound, and it is ethical, allowing us the means to protect and help far more people than ever before. Given all of these things, I urge UbiTools to become the name associated with this technology; if we don't, someone else surely will - and they will be the ones to reap both social and financial benefits from their efforts.

REFERENCES

Bharucha, AJ, London, AJ, Barnard, D, Wactler, H, Dew, MA, & Reynolds, CF, 2006, 'Ethical considerations in the conduct of electronic surveillance research', Journal of Law, Medicine and Ethics, vol. 34, no. 3, pp. 611-619.

Favela, J, Tentori, M, & Gonzalez, VM, 2010, 'Ecological validity and pervasiveness in the evaluation of ubiquitous computing technologies for health care', International Journal of Human- Computer Interaction, vol. 26, no. 5, pp. 414-444.

Hargraves, I, 2007, 'Ubicomp: fifteen years on', Knowledge, Technology, and Policy, vol. 20, pp. 3-10.

Haux, R, Howe, J, Marschollek, M, Plischke, M, & Wolf, KH, 2006, 'Health-enabling technologies for pervasive health care: on services and ICT architecture paradigms', Informatics for Health and Social Care, vol. 33, no. 2, pp. 77-89.

Kim, TS, Lee, H, Park, J, Lee, CH, Lee, YM, Choi, CS, Hwang, SG, Kim, D, & Min, CH, 2008, 'Ubiquitous evolvable hardware system for heart disease diagnosis applications', International Journal of Electronics, vol. 95, no. 7, pp. 637-651.

Orwat, C, Graefe, A, & Faulwasser, T, 2008, 'Toward pervasive computing in health care - a literature review', BMC Medical Informatics and Decision Making, vol. 8, no. 26, pp. 118.

Pimple, KD 2011, 'Surrounded by machines: a chilling scenario portends a possible future', Communications of the ACM, vol. 54, no. 3, pp. 29-31.

Rashvand, HF, Salcedo, VT, Sanchez, EM, & Iliescu, D, 2008, 'Ubiquitous wireless telemedicine', IET Communications, vol. 2, no. 2, pp. 237-254.

Rodriguez, MD, Favela, J, Preciado, A, & Vizcaino, A, 2005, 'Agent-based ambient intelligence for healthcare', AI Communications, vol. 18, no. 3, pp. 201-216.

Scheffler, M, & Hirt, E, 2005, 'Wearable devices for telemedicine', Journal of Telemedicine and Telecare, vol. 11, suppl. 1, pp. S1-S14.

Stanley, KG & Osgood, ND, 2011, 'The potential of sensor-based monitoring as a tool for health care, health promotion, and research', Annals of Family Medicine, vol. 9, no. 4, pp. 296-298.

Tucker, P, 2006, 'At home with ambient intelligence', Futurist, vol. 40, no. 2, pp. 68, 66.

van Hoof, J, Kort, HSM, Rutten, PGS, & Duijnstee, MSH, 2011, 'Ageing in place with the use of ambient intelligence technology: perspectives of older users', International Journal of Medical Informatics, vol. 80, no. 5, pp. 310-331.

Winter, JS, 2008, 'Emerging policy problems related to ubiquitous computing: negotiating stakeholders' visions of the future', Knowledge, Technology, and Policy, vol. 21, pp. 191-203.
LawStudy   
Sep 23, 2019

Pittsburgh Technical Institute (PTI) is a two-year, Middle States accredited career college with its main campus in Oakdale, Pennsylvania and a satellite campus in downtown Pittsburgh. This cultural audit will focus on the main Oakdale campus. Presently, the organization offers training for students seeking careers in technology, business, criminal justice, design, and building technology (PTI). The school has recently earned Middle States accreditation, is continuously expanding its curriculum offerings, and enrollment is at a sound level. While these dimensions would suggest that the organization is achieving its goals, an internal audit reveals many problems that are affecting its efficacy. Student satisfaction is mediocre, and the staff are not united in a common cultural direction. Staff turnover is quite high, and staff members generally operate in constant fear of losing their jobs. In addition, though the school seeks to attract highly educated teachers who would be qualified for regular university positions, these hires generally do not last long, as they feel the curriculum offerings are substantially lower-level than what they are used to at a four-year university. Their insights into these deficiencies are generally viewed by senior management as evidence that they are not team players.

Dominant Values, Beliefs and Assumptions



The school considers itself a student-centered organization. Students are considered to be customers, and their satisfaction is put at the forefront of the decision-making process. The school has what it considers to be an open-door policy. Whereas competitors like The Art Institute (AI) only take the top 20% of graduating seniors or people who pass an equivalency exam, PTI accepts anyone who has a high school diploma or equivalent and the money or financial aid to attend. Students are generally walked through every step of their education, to the extent that absences from school are approved if the student calls in sick to the department's student coordinator. In addition to the organizational focus on the student as a customer, the organization also places a high value on many layers of tiered management and on staff conformity. Employees who question decisions or offer insights and alternatives to senior management decisions do not generally last long in the organization; they either quit or are terminated at the end of the semester. Decisions are made from the top down, with very few decisions being made by those closest to the students. There is a focus on classic methodologies and bureaucratic structures rather than a results-oriented atmosphere with staff freedom. Based on Ancona et al.'s assessment of modern organizations, the values, beliefs, and assumptions held by PTI are consistent with the Old Organizational Paradigm that was popular in the industrial age but is generally acknowledged to be a weak model for a Twenty-First Century organization. Despite the challenges, PTI does graduate many students and has consistent placement in certain industries. In addition, there is no shortage of new staff to replace those lost to high turnover, as the school offers good benefit packages and competitive salaries.

Organizational Cultural Artifacts



The dominant values of the organization can be seen on a number of levels. First, the strict focus on formality and bureaucracy is reflected in the staff dress codes. Aside from Fridays, which are business-casual days, staff are expected to wear formal business attire. For men, this includes shirts and ties, and for senior management it often includes full suits or sport coats. Women also have a clear formal business attire code, so essentially, at any time it is clear who the staff are at the organization. Rather than the typical feel of a university or college, there is a business-oriented feel to all transactions. In addition, the student-centered environment is reflected in teachers not having their own personal offices but rather open departmental areas for all staff members, and in bulletin boards featuring student success stories and other PTI-related success stories. Faculty meetings and training also reflect the formality of the institution. Meetings are led by department heads, senior management, or the human resource department. Meetings are not open forums for discussion; instead, they are information dissemination exercises for the managerial hierarchy. Staff are sometimes asked to participate in discourse, but these are still management-led, controlled discussions. Typically, dissension is frowned upon, and information is presented with the intent that staff follow it rather than react to it. Participative management is not emphasized and is seldom used, in favor of rigid authoritative structures.

The culture of the organization is emphasized from the onset of hiring. All staff members are required to participate in a two-day orientation in which the entire handbook is taught to the new team members. During this time, new staff are allowed to ask questions and gradually become familiar with the organizational expectations. This orientation, however, is fully management-oriented and does not address staff-to-staff cultural interaction. At the onset of joining the team, new staff have no exposure to the actual morale or true attitudes of the general staff. As will be further examined in the organizational culture effectiveness portion of this discourse, the culture perpetuated by senior management is not embraced by the general staff, and members follow protocols out of fear of termination. The overall morale of the organization is quite low. Management consistently mistakes conformity and order-following for a smoothly running organization. They assess staff compliance as demonstrating an efficacious environment for their customers and a unified work direction. In reality, neither is actually true. With staff members fearing candor, the organization has no successful tools for accurate self-evaluation. Rules and new directions are based on theoretical ideas rather than on what is actually occurring at the staff level of operations.

Holistically, the organization is trying to elicit a formal, customer-focused learning environment in which the best decisions are regarded as those flowing from top-down management communication. The emphasis is on protocols and following instructions rather than on a results-oriented atmosphere. While what the organization wishes to be at the cultural level is what is seen by those first entering, by those viewing at a cursory glance, and by the outside world, anyone auditing the culture at the staff or customer level quickly becomes aware of the strain between the values the organization expresses and how those values manifest at the staff level. There is an obvious tension between the two structures. This fact alone means that there are organizational culture problems present within PTI that are keeping it from being the best that it can be.

Evaluation of Organizational Culture Effectiveness



The organizational culture's effectiveness rests on compliance through fear and therefore does not lead to voluntary group cohesion. Staff members around the water cooler typically refer to Friday as Black Friday, noting that Friday is often the day on which staff members are fired. A common conversation amongst staff is concern over who will be fired next. Those members who question decisions, even poor decisions, by management, or who are more vocal than normal during staff meetings, are typically acknowledged as being in danger of losing their position on a Friday. Rather than coming together to support one another, this environment has led most long-tenured people to keep to themselves and "stay below the radar," as one staff member explained during the audit. Most staff, even though they are committed to helping students to the best of their ability, are not loyal to the organization. When advanced-degree hires get job offers at more traditional universities or colleges, they typically leave with very little notice. Some of the standards by which staff are evaluated are generally considered unfair, and few expect to get raises or promotions based on merit. There is also a noted issue related to student-staff conflict: staff members have generally observed a trend of management taking the side of the customer over that of the staff or teacher. As a result, rather than one cohesive unit, several subcultures are present that all operate in their own self-interest. These competing paradigms can be classified as the student group, the manager group, the staff group, and the teacher group. Only the management considers the mission unified amongst the cultural groups; the cultural audit reveals otherwise.

The way in which the various cultural attributes manifest into subcultures is holistically problematic for achieving the aims of the organization. It is clear that a unified culture more appropriate for the Twenty-First Century would better serve the students. Though the organization is profitable and successful, it is obvious that it could be more so if the staff and management were unified and more collaborative, team-oriented leadership approaches were embraced. If the organization wishes to be regarded on the same level as traditional universities or colleges and respected by its staff members, a major cultural overhaul has to occur from the top down. The culture of fear present amongst the staff is quite damaging to the overall direction of the organization.

References

Ancona, D., et al. (2005). Managing for the Future. Canada: Thomson.

Pittsburgh Technical Institute. Official Website.
LawStudy   
Sep 21, 2019

Contracts have been used for thousands of years to make formal agreements between two or more parties. Increasingly, parties have been including Alternative Dispute Resolution (ADR) clauses in contracts in case any kind of dispute should arise between them. Alternative Dispute Resolution is any informal method of addressing and resolving disputes other than litigation. Contract disputes can eat up valuable time, energy, and money for everyone involved. Including an ADR clause in a contract can help disputing parties work through their issues without involving the courts, usually for less money and in a shorter period of time. It also allows the disputing parties more direct participation, rather than having the process run by lawyers and judges. While ADR is often quite useful and a welcome alternative to litigation, it is not appropriate for every dispute. There are also several different types of ADR, so the parties need to ensure they include the type that best fits their specific contract and situation. The types of ADR typically include neutral evaluation, negotiation, mediation, conciliation, and arbitration; the two most commonly used are mediation and arbitration (Farlex).

Mediation is an informal alternative that involves the help of a go-between third party, called a mediator, whose job is to help the parties reach some sort of mutual agreement. Mediators cannot force the parties to agree and are not permitted to decide the outcome of a dispute. Therefore, during mediation, both parties retain a significant amount of control over the course of the process and construct their own agreement. It is the mediator's job to determine the parties' interests and to help them explore practical, legal solutions that they can both agree on, though either party can end the negotiations at any time. Mediation is completely confidential, and most parties will be asked to sign a confidentiality agreement so that the mediator cannot be called to testify about what was discussed during the mediation process. Agreements are usually non-binding, so the parties may still pursue litigation following mediation. When mediation is successful, it reduces the likelihood of additional court involvement and usually leaves both parties better satisfied than when they began. Mediation can be especially useful when the parties have a relationship they would like to preserve, such as family members or business partners. It is not particularly helpful if one of the parties is unwilling to compromise or if one party holds a significant advantage of power over the other (California, 2017).

Arbitration shares many aspects with mediation but is more like a mini-trial than anything else. Both parties agree on an arbitrator, who is often a professional in the parties' subject of dispute. If the parties cannot agree on a single arbitrator, each may choose one, and the two arbitrators then select a third to complete the panel. An arbitrator acts similarly to a mediator, though not as a go-between facilitator. The parties are also able to choose the applicable state law and venue themselves to ensure neutrality. Once the parties have agreed on and selected the above factors, the "trial" can begin. Both parties' arguments will be heard, with limited discovery and simplified rules of evidence. Arbitration hearings are considerably longer than mediation sessions, usually lasting from a few days to a week. The arbitrator(s) will deliberate and issue a written decision, or arbitral award. Unlike in mediation, neither party can withdraw from arbitration once it begins, and, if the arbitration is binding, the decision is the final resolution and is enforceable under both state and federal law. The decision is not public record, making it more private than going to court. Arbitration is mostly used to settle disputes in labor, construction, and securities regulation (Staff, 2007). It is best for cases where the parties would like another person to decide the outcome of the dispute without the time or expense that litigation brings.

Conciliation is another type of ADR and has similar qualities to both arbitration and mediation with several distinct differences. In conciliation, both parties select an independent third party who will hear both sides of the dispute either privately or together. The conciliator then prepares a compromise that is fair to all of the parties. The decision is not binding or enforceable unless both parties agree to it. The main difference between conciliation and mediation is that the conciliator is the authority figure who is responsible for determining the best course of action and solution for the parties. They often propose the terms of settlement, not the parties. Conciliation is sometimes used preventively, as soon as a dispute emerges, to help prevent a serious conflict from developing. Mediation or arbitration is usually the next step if conciliation fails.

Whether assisted or unassisted, negotiation is the most basic means of settling differences and is commonly used by most people in everyday life. When an issue between two or more parties arises, negotiation is usually the first method of problem solving attempted. Negotiation is generally characterized as resolving the dispute through cooperation instead of competition. Parties may hire someone, such as an attorney, to negotiate for them, or the parties may deal directly with each other. There are no specific procedures to follow, and negotiation can take place anywhere from a board room to a living room. Negotiations are not binding unless both parties agree and draw up a contract (Canada & Communications).

Neutral evaluation is the least formal type of ADR after negotiation. Each party presents their case to a neutral person called an evaluator, who is often an expert on the topic. The evaluator offers an opinion on each party's strengths and weaknesses and on how the dispute could be resolved. The opinion is not binding; it simply suggests to the parties how a resolution could be reached. This type of ADR is most appropriate in cases where an expert opinion is needed to advise on technical issues (California).

As with most things, there are advantages and disadvantages to using an ADR clause in a contract. When an ADR clause is included in a contract, the parties agree to surrender their rights to have their disputes heard by the courts. Many states have found that ADR clauses waiving a party's right to bring a suit to court are enforceable, though this is not the case in all situations. Because of this, both parties should always weigh the pros and cons of including an ADR clause in the contract.

The biggest pro of including an ADR clause is the time and money both parties stand to save. Court costs, attorney fees, and years of fighting in a courtroom are daunting to most people, but these burdens can be avoided with an ADR clause. This is especially helpful for clients who may not be able to afford the costs associated with full-blown litigation. Another pro is that if a dispute does arise, the parties are able to work with an impartial third party without putting additional stress on their relationship. ADR allows both parties to work together on a compromise, which generally leads to less escalation and ill will and often prevents hostility between the parties. Both parties should be satisfied with the decision, since a compromise avoids producing a winner and a loser. They may even be able to work together in the future.

ADR is especially helpful when the contract deals with complex or unfamiliar subject matter. The parties are able to choose a third-party expert with the expertise needed to help everyone understand the issues at hand and how best to handle them. Because of this knowledge, the expert is often able to make a better-informed decision than if the matter were left up to the courts. The parties are also able to set all of the conditions of the ADR clause themselves. They can determine time limits, rules about evidence, and whether any appeal rights are available, in addition to other procedures. Finally, the outcome of the dispute is kept confidential. This allows both parties to keep their contract and business private, a luxury that litigation does not allow (Simpson, 2012).

Of course, as with any alternative solution, there are drawbacks in addition to the positives. The first drawback is the cost of ADR itself. While ADR can save both parties a tremendous amount of money by avoiding litigation, the process is not free. The parties must hire a third-party neutral, which can sometimes cost upwards of tens of thousands of dollars, if not more. Secondly, the outcome may not be what is expected. In ADR, many parties waive their rights to object to evidence that might not be admissible in a court of law, so questionable evidence such as hearsay may be considered. When a decision is made by the courts, very strict rules and laws must be followed, which makes a court's ruling more uniform and predictable than the decision of a mediator or arbitrator. The lack of transparency may also make the proceedings more susceptible to bias, because they are not held in an open courtroom for all to see; more influential and/or wealthy clients may have the upper hand.

Another issue is that when a contract is drawn up, the parties may not know what type of ADR to include should a problem arise. As a result, the wrong type of ADR might be selected and its rules may not be the best fit for the parties. Both parties should research the different types of ADR so they can make an informed choice. If one of the parties does not agree with the outcome, they usually have limited recourse; ADR often prevents a rehearing of the issues, even if one party feels they were treated unfairly. If one of the parties feels they would be sacrificing too many of the rights and protections they would receive via litigation, then ADR would not be the appropriate method to use. Finally, if mediation or non-binding arbitration fails, the parties will ultimately have to go to court anyway, having only increased the time and money spent on the issue (Simpson).

When two (or more) parties choose to enter into a contract together, an ADR clause is something everyone should consider. No one ever expects to actually invoke the ADR clause in a contract; unfortunately, disputes and disagreements between parties happen more often than people think. An ADR clause can help the parties avoid costly and time-consuming litigation. However, the parties do need to consider what type of ADR should be used in their contract, if they determine that one is necessary. The type of ADR chosen should work well with the terms and structure of the contract. In addition to the types of ADR mentioned above, there are countless other methods, many of which modify or combine the types described here. Despite the variety, all methods share the same goal: for the parties to find the most effective way of resolving their dispute without resorting to litigation. While ADR does not always solve the problem, it should certainly be attempted before the parties take their dispute to court.

References

Simpson, B. (2012, July 25). Alternative dispute resolution clauses in contracts: Not just boilerplate.

Staff. (2007, August 6). Alternative dispute resolution.

Judicial Council of California. (2017). ADR types & benefits.

Farlex (2003). The Free Dictionary.

Canada, G. of, & Communications, E.
LawStudy   
Sep 18, 2019

Introduction

In Australia, the assumption is made that employees will defer to the interests of the employer. This is exhibited in the allocation of intellectual property rights, which may arise from many different types of activities associated with employment. These legal rights allow the owner of the intellectual property to exploit or use the ideas in any legal way they see fit. There are different types of intellectual property, including copyrights, patents, designs, and trademarks.

Copyright prevents unauthorized distribution or copying of ideas expressed in the form of broadcasts, video recordings, sound recordings, music, photographs, drawings, software, computer files, papers, or books. Patents apply to processes or products which are unique.1 Designs refer to specific types of visual presentations used in association with commercial products. Trademarks can include labels, logos, or names; they indicate that a service or goods originated from a specific business or individual. The only one of these intellectual property rights which arises automatically in Australia is copyright. The other types must be obtained by formally registering the trademark, design, or invention.

In the case of a copyright or design, any work created as part of employment automatically belongs to the employer.1 This is specified in the Copyright Act 1968 s 35(3), (6) and the Designs Act 2003 s 13(1)(b). However, these rules are subject to agreements between employers and employees. For example, a number of universities allow academics to retain the copyright on scholarly work they produce.

A similar rule applies to patents, although it is not clearly stated in the Patents Act 1990.1 Instead, it is implied by the common law that an invention produced during employment may be patented only by the employer. An important case establishing this legal right is Sterling Engineering v Patchett (1955). An employee who chose to patent their invention would be required to assign the patent to the employer, unless some other agreement had previously been made.

Australian Universities



Universities in Australia must abide by the same laws as other institutions regarding the intellectual property rights of their employees. A university is legally able to claim the rights to any inventions created by its employees, including academic staff, which are developed as part of their employment at the institution. This is established by the common law in Australia as well as by nearly all Australian university intellectual property statutes and policies. Many of these statutes and policies also give the university ownership of any invention developed using the resources of the university, which can include equipment, laboratories, and any other university-owned assets necessary to develop the invention. A university may also establish ownership of any invention developed through publicly funded research.

The academic employees of a university in Australia have no rights under the common law to inventions created during their employment. This is, of course, not the case if there is a pre-arranged agreement that the academic may retain part or full ownership, as is the case in at least two universities in Australia.

It is generally the case that students do have a right to claim ownership of any inventions made while studying at a university. Again, this can be modified by a pre-existing arrangement. Several universities in Australia have intellectual property policies and statutes which allow them at least partial rights to inventions if the student used university resources to create the invention.

Determining ownership of patents arising from inventions by university staff can be difficult and often turns on two general issues. The first is the type of employment relationship the staff member has with the university. The second is the terms of that relationship. Of particular concern is whether the invention was discovered during the normal course of employment.

With regard to the issue of the employment relationship, a distinction must be drawn between independent contractors and employees of the university. An academic who is paid a salary by the university and works regularly would generally be considered an employee. An individual who lectures on an irregular basis and invoices the university for their time would usually be considered an independent contractor; the case would be even clearer if the irregular lecturer invoices the university through a private consulting company. Unless previously agreed otherwise, the university would generally not have any right to an independent contractor's invention.

With regard to the terms of the employment relationship, it is important to know whether there is a previous agreement. It is also important to know the type of duties the academic performs for the university. Some academics teach and do research, while others perform only one of these functions; still others may be involved in administration only. The university would have a common law right to the discoveries of an academic engaged in research if something was discovered during this work. However, the case may not be as clear if the academic serves only an administrative function and made the discovery without the use of the university's resources.

University of Western Australia v Gray



There has been a lengthy dispute between Professor Bruce Gray and the University of Western Australia regarding the professor's inventions;3 see University of Western Australia v Gray (2008).

This is considered by some to be a landmark case regarding intellectual property rights within Australian universities. Gray was a full professor of surgery employed by the University of Western Australia. He signed an employment contract when he began working for the University. This contract included an agreement that he would teach and conduct examinations according to the statutes of the University and Senate, conduct research and encourage this activity among staff and students, and perform other work as required by the Senate. Furthermore, the University has intellectual property regulations and rules regarding patents which apply to all academics employed by the institution.

Professor Gray did a great deal of research involving bowel and metastatic liver cancers. In the course of this research, he successfully developed technologies using microspheres to treat the tumors in a targeted fashion. This resulted in several patents, filed in a number of different names, including Gray's. In 1997 the professor discontinued his full-time work at the University and assigned a number of property rights to a company called Sirtex Medical Limited, which was established to market and commercialize the microsphere technology Gray had developed. The company made an initial public offering in 2000, and the professor became a director of the company with a large number of shares.

The University claimed it had sole rights to Gray's inventions because they were developed as part of his employment.3 The courts ruled in favor of the professor, and the University's final appeal was rejected in February 2010; see University of Western Australia v Gray [2010] HCA Trans 11. This case shows the importance of monitoring the contracts used for employees and independent contractors. A blanket declaration by the employer that it holds intellectual property rights is not necessarily sufficient to establish a legal right, especially where the terms are merely implied. These rights must be secured by expressly stated conditions agreed upon by both parties prior to employment.

Government Research Organizations



Government research organizations also encounter instances when employees make significant discoveries. Examples of these types of organizations in Australia include the Defence Science and Technology Organisation and the Commonwealth Scientific and Industrial Research Organisation. Both receive public funds from the government to conduct their research. There is no uniform legislative or executive policy regarding ownership of intellectual property in these organizations; therefore, institutional policies and common law principles must be applied to determine property rights. Another important factor to consider is the level of commercialization of the invention.

According to common law principles, a publicly funded government research institution owns inventions created by employees when the discoveries are made during their employment.2 The same difficulties which apply to universities also apply to these organizations: it is important to establish the type of relationship and the terms of employment. A number of government research institutions in Australia have developed intellectual property policies covering these situations.

The majority of government research organizations in Australia have formal intellectual property policies which apply to all staff. They also generally include a section in their employment contracts covering intellectual property rights for all employees. Most of these organizations have internal documents dealing with ownership of intellectual property and general business guidelines. These measures usually give the research organization full ownership of any discoveries and inventions developed by employees as part of their employment. This also includes activities not done during normal working hours if they are clearly related to the employee's official duties. In other words, if a researcher extends their hours in the laboratory and makes a discovery after their usual time of work, but the discovery is directly related to the work they normally do, the organization can still claim ownership.

Business and the Strategic Use of Intellectual Property



With regard to intellectual property rights in inventions developed by employees, businesses in Australia have the same implied rights as a university or government research organization.4 If the discovery is made as part of the employee's job duties, then ownership can be claimed by the employer. In the twenty-first century, the majority of businesses hold more capital in intangible assets than in physical property. The rights to these assets are secured with brands, trademarks, copyrights, and patents. However, this protection comes at a cost.

The process of obtaining a patent or trademark can be time-consuming and expensive. Another cost which many businesses are now considering is the intangible factor of employee drive: if employees are not provided with sufficient incentives for making new discoveries, they will often stop expending energy toward this goal. While it is important for a business to protect its intellectual property assets, there must also be room for employees to benefit from their discoveries. Businesses can accomplish this in a number of ways. For example, employees who discover more effective methods of accomplishing a task can be given a rise in pay or assigned a more prestigious position.

Businesses which are based upon research may wish to make a more direct association between the development of novel ideas and monetary gain. In this case, the company may allow the discoverer of a new product or process to have a percentage share in the increased profits which result directly from the discovery. This is not a new strategy: chief executive officers of major corporations have for many years received benefits packages that include shares of the company. This means that a chief executive officer who develops new corporate strategies and more effective methods of accomplishing a corporation's tasks will be compensated automatically by increased share prices.

Company Directors and Intellectual Property



It is generally the case that discoveries made by an employee of a company while performing their duties are the property of the employer.5 It is also true that, unless an arrangement has been made to the contrary, an independent contractor's discovery is the property of that individual. A third category of relationship with an organization is that of the company director. Who owns the intellectual property in discoveries made by a company director in the performance of their duties? This question was recently decided by an Australian court of appeal in Eastland Technology Australia Pty Ltd v Whisson [2005] WASCA 144, which ruled that a company director must be treated like an independent contractor with regard to intellectual property rights.

Conclusion

In Australia, any discovery made by an employee during the completion of their job duties is the property of the employer.1 This is implicit in the employment relationship according to the common law: the legal right of ownership belongs to the employer. However, this is only automatic in the case of copyrights. The rule regarding patents is similar, but an assignment to the employer is required. Both universities and publicly funded government research organizations frequently face issues related to intellectual property rights in new discoveries.

There are two basic factors to be considered regarding intellectual property rights in these cases. The first is the nature of the employment relationship; the second is the terms of the employment. If the individual making the discovery is an employee of the organization and makes the discovery during the performance of their duties, then the property right belongs to the organization. If the individual is an independent contractor, they retain the right of ownership; this is also true for a company director.5 However, as the case of University of Western Australia v Gray [2010] HCA Trans 11 made clear, it is prudent to ensure that all ownership rights regarding intellectual property are agreed upon in writing prior to employment. While it may be considered implicit in the employment contract that discoveries will belong to the employer, this may not always be enforceable.

REFERENCES

Stewart, Andrew. Stewart's guide to employment law. Annandale, NSW, Australia: Federation Press, 2008.

Christie, Andrew F., Stuart D'Aloisio, Katerina L. Gaita, Melanie J. Howlett, and Elizabeth M. Webster. "Analysis of the legal framework for patent ownership in publicly funded research institutions." Commonwealth of Australia, Department of Education, Science & Training.

Still, Mary. "Employers, employees, and intellectual property: The saga of University of Western Australia v Gray." Clayton UTZ.

Hunter, Laurie. Intellectual capital: Accumulation and appropriation, Melbourne Institute working paper No. 22/02. Melbourne, Australia: Melbourne Institute of Applied Economic and Social Research, The University of Melbourne, 2002.

Knight, Peter. "Who owns the IP devised by a company director?." Clayton UTZ. 14 Dec. 2005.
LawStudy   
May 09, 2019

Introduction

Within the context of the post-Enron age, intense scrutiny of multinational corporations (MNCs) has arisen from visible unethical behavior in conjunction with the mounting power wielded by businesses in the global marketplace. Utilitarian ethics has a wide spectrum of implications for the business environment, with capitalist markets particularly and inextricably bound to utilitarian perspectives (Cleveland, 2000). Price-fixing represents a heatedly debated issue within capitalist markets, with recent legal changes reflecting an unprecedented level of tolerance for agreements between two or more firms on the price at which a product or service should consistently be offered to consumers (Jones and Turner, 2010). Economic behavior represents a critical aspect of human nature. From a utilitarian perspective, the economic behaviors of individuals should be undergirded by the assumption that all individuals share a mutual interest in such behaviors; more specifically, all human beings have an interest in garnering the highest return on their resources. If price-fixing were ethical according to a utilitarian framework, then the practice would need to lead, more often than not, to most economic players garnering the highest possible return on their resources. This inquiry argues, however, that this is irrefutably not the case, with price-fixing generally not yielding the greatest benefit for the greatest number of individuals. Moreover, price-fixing can easily be associated with other manifestations of unethical outcomes.

Utilitarian Ethics and the Global Business Environment



A capitalist economy is supported by individuals who share a common goal: to fulfill their own self-interest. Cleveland asserts that "on this foundation, economists have made significant headway in explaining not only much of what takes place in trading relationships, but also in developing the model of supply and demand as an extremely useful tool for predicting the outcomes of various changes in important variables" (p. 87). Utilitarianism, formalized by Jeremy Bentham and John Stuart Mill in the late eighteenth and nineteenth centuries, holds that ethical behavior yields the greatest good for the greatest number of people. From an economic perspective, the greatest good is linked to monetary gain, with corporations representing large-scale mechanisms for achieving this gain. The context of the global marketplace and the trends emerging within it, however, has raised questions relative to utilitarianism and which outcomes truly constitute the greater good.

The same forces which have prompted increased ethical scrutiny of firms' accounting practices, namely glaring instances of ethical violations and the exponentially increasing power of the corporation, have also shaped the corporate social responsibility (CSR) movement. The CSR movement is grounded in the notion that competitive advantage must be sustainable, with short-term financial gains insufficient to meet the needs of multiple stakeholder groups. From a utilitarian perspective, by extension, it is not merely the financial dimension which must reflect the greatest good for the greatest number of people; the social and environmental dimensions must reflect these outcomes as well.

Price-Fixing: An Overview



Price-fixing represents an agreement regarding the price at which a product or service will be bought or sold; this agreement must be between parties on the same side of the market for the practice truly to constitute price-fixing, the goal being to control both supply and demand by fixing prices. From a utilitarian perspective, price-fixing could be rationalized in that the ultimate goal is to drive prices to the highest possible level in order to yield the most profit; in turn, those within the price-fixing agreement mutually benefit from the resultant price stabilization. A wide range of additional practices are associated with price-fixing, including list pricing, discount limitations, and the creation of general market barriers against anything which would threaten the fixed price (Jones and Turner, 2010).

However, price-fixing is unethical according to utilitarian assumptions because, while the practice may appear ethical in theory, it is not ethical in practice. From a neo-classical and natural law perspective, price-fixing does not permit the necessary competition within the capitalist market; this, in turn, creates an operating environment controlled by a limited number of people. The greatest good for the greatest number of people is achieved only by permitting free market competition (Cleveland, 2000). The legal environment regarding price-fixing has shifted within the United States in recent years; previously, the Sherman Antitrust Act was the only relevant legislation under which those engaging in price-fixing could be prosecuted. During the late 1990s, state supreme courts began distinguishing between vertical and horizontal price-fixing, the former referring to retail price-fixing; the United States Supreme Court held that the Sherman Act was not violated through vertical price-fixing, though horizontal price-fixing remained prosecutable by law. Jones and Turner (2010) describe this decision as follows:

In mid-2007, the U.S. Supreme Court overturned a 1911 precedent prohibiting manufacturers from setting prices at the retail level. That earlier decision put resale price maintenance (vertical price fixing) into the category of "per se" violations of the Sherman Act. Per se violations cannot be justified by findings of benefits to competition. The 2007 Leegin decision moved such cases into the other category under the Sherman Act: the rule of reason. Under this category, proof of a violation is often very difficult because of the type of evidence required. Such cases rarely succeed, with the effect that manufacturers can now set retail prices (p. 89).

The authors argue that the Supreme Court decision was an unfortunate one that will lead to increasingly blurred lines between horizontal and vertical price-fixing, with monopolies inevitably emerging in the coming years, particularly within the retail market.

Conclusions

While the global marketplace is evolving to define economic activities no longer purely in terms of economic benefit but also in terms of social and environmental outcomes, it remains the case that price-fixing will ultimately continue to support a very limited number of stakeholders. Blurred ethical lines might emerge if price-fixing could be demonstrated, in some instances, to preserve environmental resources or protect vulnerable populations in some way. There is, however, no evidence that this is occurring thus far, with the literature suggesting that price-fixing actually harms consumers by limiting their choices and forcing them to pay higher prices (Jones and Turner, 2010). Jennings (2006) posits that price-fixing is essentially a red-flag behavior indicative of a corporation's potential for compromising its ethics in general and potentially driving itself out of business. From a utilitarian perspective, business practices must yield the greatest good for the greatest number of people, with price-fixing clearly doing more harm than good, benefiting only a small number of market players.

REFERENCES

Cleveland, P. A. (2000). Economic Behavior: An Inherent Problem with Utilitarianism. Journal of Private Enterprise, 16(1), 81-90.

Jennings, M. M. (2006, Summer). The 7 Signs of Ethical Collapse What Makes a Good Company Go Bad? Recognising and Remedying the Warning Signs of Ethical Collapse Can Help Prevent Accounting Scandals and Restore Market Trust. European Business Forum, (25), 32-45.

Jones, B. J., & Turner, J. R. (2010). The Fall of the per Se Vertical Price Fixing Rule. Journal of Legal, Ethical and Regulatory Issues, 13(2), 83-91.
LawStudy   
Aug 24, 2018

Justice and law, two intertwined fields in both society and morality, form the braided backbone of individuals' interactions and build the foundation of the standards to which individuals are held within a society. The ideals of justice are often closely related to morality, while law and the regulations set out therein can be seen as extensions of government, and therefore of those charged with upholding those laws.

The study of criminology is a broad field, which encompasses an understanding of the concepts of crime and the mechanisms behind the criminal mind, and the application of those insights to protect the general public from criminals who, unfortunately, have no regard for the laws and standards by which we all live. In this field, I can use my passion for law and my sense of morality to build a career grounded in justice, working for the CIA, the FBI, or as an officer dedicated to keeping the population safe from such lawbreakers.

In this regard, I have geared my education toward a broad and comprehensive understanding of human nature and law by majoring in History, Politics, and Philosophy. Majoring in History has allowed me to grasp the achievements, failures, and events that have led to the present day, providing a specific understanding of the history of law and lawmakers, of how law has shaped the evolution of society, and of how, in the face of the current political climate, the law and the agents who enforce it are more important than ever. The Politics major, likewise, has given me a broad understanding of political leanings, of international relations between societies and cultures, and of the differences and similarities that may spawn wars, acts of terrorism, and crimes of hate and prejudice. Philosophy, of course, has an application in criminology, as it fosters the ability to think outside the box and to understand the workings of a mind that may be foreign to one's own thoughts and ideals.

In addition to my education, I have had work experience that will assist me in my career choice and continuing education. My five years' experience as a paralegal has given me great understanding of and admiration for the judicial system and its regulations. I have worked within that system and admire the way our laws provide justice for those who seek it and defend those who require defense in a fair and public forum.

I have also had the pleasure of managing personnel at a major oil company, which has greatly honed my leadership and interpersonal skills. Learning to manage personnel effectively is a difficult and never fully mastered skill, as each individual has different requirements, such as learning styles and personality types. The ability to navigate successfully between different individuals and help each become the best employee they can be is one I can bring into criminology, as dealing and communicating effectively with different personality types is essential when trying to extract the truth from a convoluted situation.

Continuing education requires a dedicated mindset and a desire to take full advantage of the endowment that education gives to an individual's future. Not all who set out to achieve a master's degree will succeed; however, I know that I am one of those with the drive and will to succeed. I sincerely want to be one who seeks to provide justice and keep the innocent from harm. In addition, I desire to see those who would make a political statement through acts of terrorism stopped, and to allow the citizens of our great country to live in peace without fear. I appreciate the time you have taken to read this essay, and I hope that my desire to fill this position in our justice system is apparent, and that you will share my commitment to my education.