Saturday, November 16, 2019
Change Management Plan to Reduce Medication Errors
Assignment 2. Change Management Plan: reducing medication errors by building a dual medication error reporting system with a 'no fault, no blame' culture

Introduction

Medication errors in hospitals are the most common health-threatening mistakes made in Australia (Victoria Quality Council, n.d.). Adverse events caused by medication errors can affect patient care, leading to increased mortality rates, lengthy hospital stays and higher health costs (Agency for Healthcare Research and Quality, 2012). Although it is impossible to eliminate all medication errors, since human errors will always occur, reporting errors is fundamental to error prevention. "Ramifications of errors can provide critical information to inform the modification or creation of policies and procedures for averting similar errors from harming future patients" (Hughes, 2008, p. 334). This highlights the importance of change management in providing a system for effective error reporting. In this paper, the author explores current incident-reporting systems and discusses the potential benefit of a dual medication-error reporting system with a 'no fault, no blame' culture through a literature review, followed by a clear rationale for the necessity of a change management plan. Lippitt's Seven Steps of Change theory is then demonstrated in detail, with clear strategies suggested for assessing the plan outcomes. Finally, the main issues are summarised in the conclusion.

Discussion

Medicines are the most common treatment used in the Australian healthcare system and make a great contribution to relieving symptoms and preventing or treating illness (Australian Commission on Safety and Quality in Health Care, 2010). However, because medicines are so prevalently used, the incidence of errors associated with their use is also high (Aronson, 2009). Over 770,000 people are harmed or die each year in hospital due to adverse drug events, which can cost up to 5.6 million dollars per year per hospital. Medication errors account for one out of 854 inpatient deaths, and it is notable that medication error-related deaths outnumber deaths from motor vehicle accidents, breast cancer and AIDS (Hughes, 2008). Reporting provides a platform for errors to be documented and analysed, so that causes can be evaluated and strategies created to improve safety. A qualitative study (Victoria Quality Council, n.d.) surveyed the medication error reporting systems in both metropolitan and rural hospitals in Victoria. Most hospitals prefer reports to be named, as this allows follow-up of incidents, whereas only a small proportion of hospitals use anonymous reporting to lower the barrier to reporting; yet the correlation with actual errors has been low. In addition, a majority of hospitals acknowledged that near misses are supposed to be recorded but are rarely documented. It is clear that errors and near misses are key to improving safety, so they should be reported regardless of whether an error resulted in patient harm. That a near-miss error with the potential to cause a serious event did not do so does not negate the fact that it was, and still is, an error. Reporting near misses is invaluable in revealing hidden danger.
Hughes (2008) pointed out that the majority believe a mandatory, non-confidential incident reporting system could invite and encourage lawsuits, resulting in a reduced frequency of error reports. A voluntary and confidential reporting system is preferred, as it encourages the reporting of near misses and generates more accurate error reports. However, there is concern that with voluntary reporting, the true frequency of both errors and near misses could be much higher than what is actually reported (White, 2011). Thus, it can be concluded that a dual system combining both mandatory and voluntary mechanisms might improve reporting. Although nurses should not be blamed or punished for medication errors, they are accountable for their own actions. Therefore, reporting errors should not be about attributing blame to individuals, but should "hold providers accountable for performance" and "provide information that leads to improved safety" (Hughes, 2008). Individuals' and organisations' attention needs to be drawn toward improving the error reporting system, which means to "focus on a bad system more than bad people" (Wachter, 2009). Reporting of errors should be encouraged by creating a 'no fault, no blame' culture.

Rationale

Medication errors can occur as a result of human mistakes or system errors. Every medication error can be associated with more than one error-producing condition, such as staff being busy, tired or engaged in multiple tasks (Cheragi, Manoocheri, Mohammadnejad & Ehsani, 2013). Nurses are most involved at the medication administration phase and are the last people involved in the drug delivery system. It becomes the nurses' responsibility to double check prior to the administration of medication and to capture any potential drug error that might have been made by the prescribing doctor or the pharmacy. Whether the nurse is the source or an observer of a medication error, organisations rely on nurses as front-line staff to report medication errors (Hartnell, MacKinnon, Sketris & Fleming, 2012). When things go wrong, the most common initial reaction is to conceal the mistake. Not surprisingly, most errors are only reported when a patient is seriously harmed or when the error could not be easily covered up (Hughes, 2008). Reporting potentially harmful errors before harm is done is as important as reporting the ones that harm patients. The barriers to error reporting can be attributed to a workplace culture of blame and punishment. Blaming someone does not change the contributing factors, and a similar error is likely to recur. Adverse drug events caused by medication errors are costly, preventable and potentially avoidable (Australian Commission on Safety and Quality in Health Care, 2009). Thus, the interventions implemented must ensure a competent and safe medication delivery system. To do so, change is needed: the adoption of a dual medication error reporting system with a 'no fault, no blame' culture at Holmesglen Hospital.

Change Management Plan

The nursing role has evolved to match the ongoing growth of the Australian healthcare delivery system. There is a trend for nurses to take responsibility for facilitating positive change in areas related to health (Steanncyk, Hancock & Meadows, 2013). Nurses play the role of change agents, which is vital for the effective provision of quality healthcare. There are many ways to implement changes in the work environment.
Lippitt's Seven Steps of Change theory is one of the approaches believed to be more useful, as it incorporates a detailed, step-by-step plan of how to generate change (Mitchell, 2013). There are seven phases in the theory:

Phase 1: The change management plan begins at this phase with a detailed diagnosis of the problem. No matter what reporting procedures are in place, they may capture only a fraction of actual errors (Montesi & Lechi, 2009). Reporting medication errors remains dependent on the nurses' decision making, and nurses may be hesitant or avoidant about reporting errors due to fear of consequences. A combination of mandatory and voluntary reporting systems is suggested, with a 'no fault, no blame' approach to reduce cultural and psychological barriers (Hughes, 2008). Both statistical review and one-to-one informal interviews can help to identify areas that need attention and improvement. An open-door policy with a choice of disclosure channels is suitable: nurses who want to express their concerns can approach a nurse unit manager, a nurse in charge, a supervisor, a senior, a nurse representative or a colleague. This approach can be effective in exploring and uncovering deep-seated emotions, motivations and attitudes when dealing with sensitive matters. Statistical review, such as RiskMan reviews, is a useful tool to capture and classify medication errors (RiskMan, 2011). Holmesglen Hospital conducts bi-monthly statistical reviews to gather information on the contributing factors of medication errors, aiming to target system issues that could contribute to errors made by individuals and to make changes at the organisational level. For example, if medication errors are constantly caused by staff who are distracted or exhausted, staffing levels and break times will be reviewed.

Phase 2: At this stage, motivation and capacity to change are assessed. This involves small group activities, such as staff meetings or medication in-services, to which all nursing staff are invited. Feedback can be given either directly (face to face) or indirectly (survey), and nursing staff knowledge, desire and skills necessary for the change, as well as their attitude toward change, are assessed. Staff motivation can be reflected in rates of meeting attendance, the number of submitted surveys, or the number of staff who actively participate in the meeting discussion. Nurses who have good insight and are actively involved in the meeting are the 'driving forces' that will facilitate the process of change management; nurses who are hesitant or averse to change are the resisting forces, and force-field analysis can be used to counter this resistance (Mitchell, 2013). Force-field analysis is a framework for problem solving. For example, with the health budget crisis Australia faces today, many hospitals and units may have financial constraints and be incapable of maintaining the flow of the change process. In the meetings, financial issues can be raised at the organisational level to argue that making the change is necessary both for better patient outcomes and for reducing unnecessary healthcare costs.

Phase 3: With the motivation and capacity levels addressed, the next step is to determine who the change agent is and whether the change agent has the ability to make a change. A change agent can be any enthusiastic person who has great interest and a genuine desire and commitment to see positive change. Daisy is a full-time associate nurse unit manager (ANUM) who has been employed by Holmesglen Hospital for some years.
As she has a background as a pharmacist, part of her role includes providing drug advice to nurses. During her weekly medication review, Daisy noticed that medication errors were occurring frequently but that there was little correlation with the actual reports submitted. Daisy decided to run in-service sessions, to which all nurses are invited. Daisy discussed her change management plan with the nurse unit manager, who also expressed interest and agreed to provide human resources and reasonable financial support. Another four ANUMs also expressed interest and commitment. It has been arranged that two ANUMs attend the in-service each time.

Phase 4: The in-service is designed to run for six months, from 15 September 2014 to 15 March 2015, on a monthly basis. Daisy will hold the in-service and the other ANUMs will provide assistance in implementing the change plan. The in-service will consist of two parts and run for two hours. The first hour will be a review of the performance of the last month along with relevant statistics. The second hour will be self-reflection and discussion. All participants will be paid for attendance and encouraged to complete an anonymous survey monthly.

Phase 5: Daisy is the leader of the change agents, responsible for conducting in-services, collating information regarding medication safety, and summarising data with the assistance of the ANUMs. Meanwhile, Daisy and all the ANUMs are the senior staff responsible for providing supervision and support to junior staff and other nurses. A monthly summary report of performance is submitted to the leader for review, and monthly meetings are held among the senior group to review the effectiveness of the change management plan and to adjust and modify the current plan if needed.

Phase 6: A communication folder will be used to update nurses about past meetings. A drop box is available in the staff room for anonymous suggestions and complaints, and can only be accessed by Daisy and the other four ANUMs. All suggestions and complaints will be responded to in written form within two weeks of submission, with the responses available in the staff room for all staff to read in the feedback section of the communication folder.

Phase 7: The change management plan will be evaluated at the end of the six-month period, on 30 March 2015, to determine whether it has been effective. The evaluation can be done through audit or feedback. The change agent will withdraw from the leader position after the final meeting but will still work on the ward to provide ongoing consultation. The four ANUMs will take over the role to ensure a good standard is maintained. The drop box will remain available for any further issues identified in the workplace.

Clear strategies for assessing the plan outcomes

As previously mentioned, a final evaluation will be conducted after the final in-service utilising two main approaches to assess the plan outcome: auditing and feedback. Auditing includes an internal review and an external audit; feedback consists of nursing staff feedback and patient reports. An internal review will be conducted four times through the following year. The ANUMs are assigned to conduct the review. The review includes comparing the medication charts with the incident reports to assess the correlation between them. For example, an omitted dose is considered a reportable medication error, and a corresponding incident report should exist.
An external medication audit will be conducted by an external professional to provide a true and fair reflection of the situation. It can occur annually, not only to assess the plan outcome but also to monitor practices and identify areas for improvement. The frequency of auditing will depend on the rate of staff turnover. However, every newly employed nurse will be given a printout to familiarise themselves with the change that has been made, with an open-door policy encouraging queries. If significant non-compliance is identified in the auditing, it is suggested that the first phase of the change management plan be repeated to assess the necessity of modifying the current plan (Australian Commission on Safety and Quality in Health Care, 2014a). The drop box will still be available for anyone who experiences or witnesses medication errors, or who has a suggestion to improve practice. Submission is anonymous and confidential, and only the ANUMs have access. Public feedback will be given on complaints and suggestions in a timely manner, in the form of a printout for all staff to read. Patients can be a source of reporting medication errors, as some of them know what their regular medications are. Also, new side effects experienced by patients can reflect the inappropriate use of medication.

Conclusion

Barriers to reporting errors must be removed to accomplish a safer medication administration system. Reporting medication errors and near misses through an established reporting system can provide opportunities to reduce similar errors in future nursing practice and to alleviate the costs involved in such adverse events. Several factors are necessary in the change management plan: a leader who is motivated and committed to making a change; a reporting system that makes nursing staff feel safe; and a 'no fault, no blame' culture that encourages staff to report without fear of punishment.
Wednesday, November 13, 2019
Egypt: The Gift Of The Nile
The Nile is the longest river in the world, located in northeastern Africa. Its principal source is Lake Victoria, in east central Africa. The Nile flows north through Uganda, Sudan, and Egypt to the Mediterranean Sea, a distance of 5,584 km. From its remotest headstream in Burundi, the river is 6,671 km long, and the river basin covers an area of more than 3,349,000 sq km. The Nile is considered a wonder not only by Herodotus but by people all over the world, owing to its importance to the growth of a civilization. The first great African civilization developed in the northern Nile Valley in about 5000 BC. Dependent on agriculture, this state, called Egypt, relied on the flooding of the Nile for irrigation and new soils. It dominated vast areas of northeastern Africa for millennia. Ruled by Egypt for about 1,800 years, the Kush region of northern Sudan subjugated Egypt in the 8th century BC. Pyramids, temples, and other monuments of these civilizations blanket the river valley in Egypt and northern Sudan. To Egypt, the Nile is the fountain of life. Every year, between the months of June and October, the waters of the Nile, swollen in the highlands of Ethiopia, rush north and flood the land. The flooding surges over the land and leaves behind water for the people and fertile soil that can be used for agriculture. The impact the Nile had on Egypt in ancient times, and still has in the present, is considerably apparent. The influence of the Nile is so extensive that even speech is transposed. For example, "to go north" in the Egyptian language is the same as "to go downstream"; "to go south" is the same as "to go upstream." Also, the term for a "foreign country" in Egyptian was used as "highland" or "desert," because the only mountains or deserts were far away, and foreign to them. The Nile certainly had an exceptional influence on Egypt, in both lifestyle and thinking. The Nile also forced a change on the political system and rule of Egypt. First, because of the vast floods every year, the country needed a ruler capable of enforcing the farming methods used, such as the hoarding of water and the stocking of the harvested food. Second, only a strongly centralized administration could manage the economy properly.
Monday, November 11, 2019
College Dorm vs Apartment
Going off to college after eighteen years of rules and restrictions under your parents' roof can be a very exciting experience, but is it all that it appears to be? There are many pros and cons when it comes to living at home and living in a college dorm. Fortunately for me, I have been able to experience all three arrangements (at home, in a dorm, and in an apartment), and I can definitely say that living in a college dorm is the better option. At first glance a college dorm seems like the best thing that has ever happened to you, especially since you will no longer have your parents there to nag you. There are many obvious advantages to living in a college dorm. One of these advantages is the most obvious: you don't have to follow all the rules that your parents have laid out for you. Of course there will still be rules in your dorm, but you will have a sense of freedom. There will always be rules in society, so you can never escape that. Another major advantage of moving away to a college dorm is the experience. When I went off to college I met so many different people, learned so many new things, and had many experiences that I will remember forever. Another pro of living in a dorm is that you finally get to learn how to be independent and truly take care of yourself. Mom won't be there to wash your clothes or cook for you, so you quickly gain knowledge of how to fend for yourself. Lastly, I feel an advantage of living in a dorm is that you learn how to prioritize and be more responsible. You won't have the luxury of your parents telling you to do your homework, so being away gives you a sense of responsibility, and it is basically up to you to make the right decisions. Along with the pros there are always cons. Living in a college dorm is not always the best option, and there are definite setbacks involved in living away from home. A major disadvantage is that college life can be very distracting. There are always going to be parties and other fun things going on, which can easily take your mind away from that assignment you have due in the morning. Living in a dorm can possibly jeopardize your GPA (unknown, 2005, para. 3). A college dorm can also be a disadvantage if you have a horrible roommate. You no longer have the luxury of having your own space, which can be very uncomfortable, or even cause another distraction. Living in a dorm room can also be very costly, even if you don't use all that you are paying for. For example, you may pay for a meal plan but not eat as much food as you are paying for (Bram, 2011, para. 9). Although I would definitely choose living in a dorm over staying at home, there are also advantages to staying under your parents' roof. The number one advantage of staying at home, in my opinion, is that you have the opportunity to save extraordinary amounts of money. You don't have to worry about the cost of the dorm, food or any other expenses, and you could also get a job. Going away to college is very expensive, so staying at home just a little while longer definitely won't hurt you or your parents' pockets. With that being said, we can get a little too comfortable with not having to worry about things financially, which can keep us wanting to stay at home longer. The longer you stay at home, the harder it will be to leave later, which I find to be a major con. If for some reason staying at home longer becomes the only option for you, at least you will always be focused.
Not being around your peers constantly can absolutely keep you focused, not to mention your parents, who will consistently be on your back about keeping your grades up. Staying at home is a major advantage when it comes to doing well. Sometimes you have to really list out the positive and negative things about a particular subject to find the best option for you. When it comes to living in a dorm, you have freedom and gain experience, but it can be costly. When it comes to living at home, you will be more likely to perform better in school, but you will have to abide by your parents' rules even as an adult. I looked at all of the pros and cons of each, and still believe that living in a college dorm is the better option, not only for the experience, but because it helps to better prepare you for the future.
Saturday, November 9, 2019
Definition and Examples of a Transition in Composition
In English grammar, a transition is a connection (a word, phrase, clause, sentence, or entire paragraph) between two parts of a piece of writing, contributing to cohesion. Transitional devices include pronouns, repetition, and transitional expressions, all of which are illustrated below.

Pronunciation: trans-ZISH-en

Etymology: From the Latin, "to go across"

Examples and Observations

Example: At first a toy, then a mode of transportation for the rich, the automobile was designed as man's mechanical servant. Later it became part of the pattern of living.

Here are some examples and insights from other writers:

A transition should be short, direct, and almost invisible. (Gary Provost, Beyond Style: Mastering the Finer Points of Writing. Writer's Digest Books, 1988)

A transition is anything that links one sentence- or paragraph- to another. Nearly every sentence, therefore, is transitional. (In that sentence, for example, the linking or transitional words are sentence, therefore, and transitional.) Coherent writing, I suggest, is a constant process of transitioning. (Bill Stott, Write to the Point: And Feel Better About Your Writing, 2nd ed. Columbia University Press, 1991)

Repetition and Transitions

In this example, transitions are repeated in the prose:

The way I write is who I am, or have become, yet this is a case in which I wish I had instead of words and their rhythms a cutting room, equipped with an Avid, a digital editing system on which I could touch a key and collapse the sequence of time, show you simultaneously all the frames of memory that come to me now, let you pick the takes, the marginally different expressions, the variant readings of the same lines. This is a case in which I need more than words to find the meaning. This is a case in which I need whatever it is I think or believe to be penetrable, if only for myself. (Joan Didion, The Year of Magical Thinking, 2006)

Pronouns and Repeated Sentence Structures

Grief turns out to be a place none of us know until we reach it. We anticipate (we know) that someone close to us could die, but we do not look beyond the few days or weeks that immediately follow such an imagined death. We misconstrue the nature of even those few days or weeks. We might expect if the death is sudden to feel shock. We do not expect this shock to be obliterative, dislocating to both body and mind. We might expect that we will be prostrate, inconsolable, crazy with loss. We do not expect to be literally crazy, cool customers who believe that their husband is about to return. (Joan Didion, The Year of Magical Thinking, 2006)

When you find yourself having difficulty moving from one section of an article to the next, the problem might be due to the fact that you are leaving out information. Rather than trying to force an awkward transition, take another look at what you have written and ask yourself what you need to explain in order to move on to your next section. (Gary Provost, 100 Ways to Improve Your Writing. Mentor, 1972)

Tips on Using Transitions

After you have developed your essay into something like its final shape, you will want to pay careful attention to your transitions. Moving from paragraph to paragraph, from idea to idea, you will want to use transitions that are very clear- you should leave no doubt in your reader's mind how you are getting from one idea to another.
Yet your transitions should not be hard and monotonous: though your essay will be so well-organized you may easily use such indications of transitions as one, two, three or first, second, and third, such words have the connotation of the scholarly or technical article and are usually to be avoided, or at least supplemented or varied, in the formal composition. Use one, two, first, second, if you wish, in certain areas of your essay, but also manage to use prepositional phrases and conjunctive adverbs and subordinate clauses and brief transitional paragraphs to achieve your momentum and continuity. Clarity and variety together are what you want. (Winston Weathers and Otis Winchester, The New Strategy of Style. McGraw-Hill, 1978)

Space Breaks as Transitions

Transitions are usually not that interesting. I use space breaks instead, and a lot of them. A space break makes a clean segue whereas some segues you try to write sound convenient, contrived. The white space sets off, underscores, the writing presented, and you have to be sure it deserves to be highlighted this way. If used honestly and not as a gimmick, these spaces can signify the way the mind really works, noting moments and assembling them in such a way that a kind of logic or pattern comes forward, until the accretion of moments forms a whole experience, observation, state of being. The connective tissue of a story is often the white space, which is not empty. There's nothing new here, but what you don't say can be as important as what you do say. (Amy Hempel, interviewed by Paul Winner. The Paris Review, Summer 2003)
Wednesday, November 6, 2019
Extreme conditional value at risk: a coherent scenario for risk management
CHAPTER ONE

1. INTRODUCTION

Extreme financial losses that occurred during the 2007-2008 financial crisis reignited questions of whether existing methodologies, which are largely based on the normal distribution, are adequate and suitable for the purpose of risk measurement and management. The major assumptions employed in these frameworks are that financial returns are independently and identically distributed, and follow the normal distribution. However, weaknesses in these methodologies have long been identified in the literature. Firstly, it is now widely accepted that financial returns are not normally distributed; they are asymmetric, skewed, leptokurtic and fat-tailed. Secondly, it is a known fact that financial returns exhibit volatility clustering, so the assumption of independently distributed returns is violated. The combined evidence concerning the stylized facts of financial returns necessitates adapting existing methodologies, or developing new ones, that account for all the stylised facts of financial returns explicitly. In this paper, I discuss two related measures of risk: extreme value-at-risk (EVaR) and extreme conditional value-at-risk (ECVaR). I argue that ECVaR is a better measure of extreme market risk than the EVaR utilised by Kabundi and Mwamba (2009), since it is coherent and captures the effects of extreme market events. In contrast, even though EVaR captures the effect of extreme market events, it is non-coherent.

1.1 BACKGROUND

Markowitz (1952), Roy (1952), Sharpe (1964), Black and Scholes (1973), and Merton's (1973) major toolkit in the development of modern portfolio theory (MPT) and the field of financial engineering consisted of means, variances, correlations and covariances of asset returns. In MPT, the variance, or equivalently the standard deviation, was the panacea measure of risk. A major assumption employed in this theory is that financial asset returns are normally distributed. Under this assumption, extreme market events rarely happen. When they do occur, risk managers can simply treat them as outliers and disregard them when modelling financial asset returns. The assumption of normally distributed asset returns is too simplistic for use in financial modelling of extreme market events. During extreme market activity similar to the 2007-2008 financial crisis, financial returns exhibit behavior that is beyond what the normal distribution can model.
Starting with the work of Mandelbrot (1963), there is increasingly convincing empirical evidence suggesting that asset returns are not normally distributed. They exhibit asymmetric behavior, 'fat tails', and higher kurtosis than the normal distribution can accommodate. The implication is that extreme negative returns do occur, and are more frequent than predicted by the normal distribution. Therefore, measures of risk based on the normal distribution will underestimate the risk of portfolios and lead to huge financial losses, and potentially insolvencies of financial institutions. To mitigate the effects of inadequate risk capital buffers stemming from underestimation of risk by normality-based financial modelling, risk measures such as EVaR, which go beyond the assumption of normally distributed returns, have been developed. However, EVaR is non-coherent, just like the VaR from which it is developed. The implication is that, even though it captures the effects of extreme market events, it is not a good measure of risk since it does not reflect diversification, a contradiction of one of the cornerstones of portfolio theory. ECVaR naturally overcomes these problems, since it is coherent and can capture extreme market events.

1.2 RESEARCH PROBLEM

The purpose of this paper is to develop extreme conditional value-at-risk (ECVaR), and propose it as a better measure of risk than EVaR under conditions of extreme market activity, with financial returns that exhibit volatility clustering and are not normally distributed. Kabundi and Mwamba (2009) have proposed EVaR as a better measure of extreme risk than the widely used VaR; however, it is non-coherent. ECVaR is coherent and captures the effect of extreme market activity, so it is better suited to model extreme losses during market turmoil, and it reflects diversification, which is an important requirement for any risk measure in portfolio theory.

1.3 RELEVANCE OF THE STUDY

The assumption that financial asset returns are normally distributed understates the possibility of infrequent extreme events whose impact is more detrimental than that of events that are more frequent. Use of VaR and CVaR underestimates the riskiness of assets and portfolios, and eventually leads to huge losses and bankruptcies during times of extreme market activity. There are many adverse effects of using the normal distribution in the measurement of financial risk, the most visible being the loss of money due to underestimating risk. During the global financial crisis, a number of banks and non-financial institutions suffered huge financial losses; some went bankrupt and failed, partly because of inadequate capital allocation stemming from underestimation of risk by models that assumed normally distributed returns. Measures of risk that do not assume normality of financial returns have been developed. One such measure is EVaR (Kabundi and Mwamba, 2009). EVaR captures the effect of extreme market events; however, it is not coherent. As a result, EVaR is not a good measure of risk since it does not reflect diversification. In financial markets characterised by multiple sources of risk and extreme market volatility, it is important to have a risk measure that is coherent and can capture the effect of extreme market activity. ECVaR is advocated to fulfil this role of measuring extreme market risk while conforming to portfolio theory's wisdom of diversification.
1.4 RESEARCH DESIGN

Chapter 2 will present a literature review of risk measurement methodologies currently used by financial institutions, in particular VaR and CVaR. I also discuss the strengths and weaknesses of these measures. Another risk measure, not widely known thus far, is EVaR. I discuss EVaR as an advancement in risk measurement methodologies, and argue that it is not a good measure of risk since it is non-coherent. This leads to the next chapter, which presents ECVaR as a better risk measure that is coherent and can capture extreme market events. Chapter 3 will be concerned with extreme conditional value-at-risk (ECVaR) as a convenient modelling framework that naturally overcomes the normality assumption of asset returns in the modelling of extreme market events. This is followed by a comparative analysis of EVaR and ECVaR using financial data covering both the pre-financial crisis and the financial crisis periods. Chapter 4 will be concerned with data sources, preliminary data description, and the estimation of EVaR and ECVaR. Chapter 5 will discuss the empirical results and the implications for risk measurement. Finally, chapter 6 will give conclusions and highlight directions for future research.

CHAPTER 2: RISK MEASUREMENT AND THE EMPIRICAL DISTRIBUTION OF FINANCIAL RETURNS

2.1 Risk Measurement in Finance: A Review of Its Origins

The concept of risk was known for many years before Markowitz's Portfolio Theory (MPT). Bernoulli (1738) solved the St. Petersburg paradox and derived fundamental insights into risk-averse behavior and the benefits of diversification. In his formulation of expected utility theory, Bernoulli did not define risk explicitly; however, he inferred it from the shape of the utility function (Butler et al., 2005:134; Brachinger & Weber, 1997:236). Irving Fisher (1906) suggested the use of variance to measure economic risk. Von Neumann and Morgenstern (1947) used expected utility theory in the analysis of games and consequently deduced much of the modern understanding of decision making under risk or uncertainty. Therefore, contrary to popular belief, the concept of risk was known well before MPT. Even so, Markowitz (1952) first provided a systematic algorithm to measure risk, using the variance in the formulation of the mean-variance model, for which he won the Nobel Prize in 1990. The development of the mean-variance model inspired research in decision making under risk and the development of risk measures. The study of risk and decision making under uncertainty (which is treated the same as risk in most cases) stretches across disciplines. In decision science and psychology, Coombs and Pruitt (1960), Pruitt (1962), Coombs (1964), Coombs and Meyer (1969), and Coombs and Huang (1970a, 1970b) studied the perception of gambles and how their preference is affected by their perceived risk. In economics, finance and measurement theory, Markowitz (1952, 1959), Tobin (1958), Pratt (1964), Pollatsek and Tversky (1970), Luce (1980) and others investigated portfolio selection and the measurement of the risk of those portfolios, and of gambles in general. Their collective work produced a number of risk measures that vary in how they rank the riskiness of options, portfolios, or gambles. Though the risk measures vary, Pollatsek and Tversky (1970:541) recognise that they share the following: (1) Risk is regarded as a property of choosing among options.
(2) Options can be meaningfully ordered according to their riskiness. (3) As suggested by Irving Fisher in 1906, the risk of an option is somehow related to the variance or dispersion in its outcomes. In addition to these basic properties, Markowitz regards risk as a 'bad', implying something that is undesirable. Since Markowitz (1952), many risk measures, such as the semi-variance, the absolute deviation, and the lower semi-variance (see Brachinger and Weber, 1997), were developed; however, the variance continued to dominate empirical finance. It was in the 1990s that a new measure, VaR, was popularised and became the industry standard as a risk measure. I present this risk measure in the next section.

2.2 Value-at-risk (VaR)

2.2.1 Definition and concepts

Besides these basic ideas concerning risk measures, there is no universally accepted definition of risk (Pollatsek and Tversky, 1970:541); as a result, risk measures continue to be developed. J.P. Morgan and Reuters (1996) pioneered a major breakthrough in the advancement of risk measurement with the use of value-at-risk (VaR), and the subsequent Basel committee recommendation that banks could use it for their internal risk management. VaR is concerned with measuring the risk of a financial position due to the uncertainty regarding future levels of interest rates, stock prices, commodity prices, and exchange rates. The risk resulting from the movement of these market factors is called market risk. VaR is the expected maximum loss of a financial position with a given level of confidence over a specified horizon. VaR answers the question: what is the maximum loss that I can incur over, say, the next ten days with 99 percent confidence? Put differently, what is the loss that will be exceeded only one percent of the time over the next ten days?

I illustrate the computation of VaR using one of the available methods, namely parametric VaR. I denote by $r_t$ the rate of return and by $W_t$ the portfolio value at time $t$. Then $W_t$ is given by

$W_t = W_{t-1}(1 + r_t)$  (1)

The actual loss $L_t$ (the negative of the profit, which is $W_t - W_{t-1}$) is given by

$L_t = -(W_t - W_{t-1}) = -W_{t-1}\, r_t$  (2)

When $r_t$ is normally distributed with mean $\mu$ and standard deviation $\sigma$ (as is normally assumed), the variable $z_t = (r_t - \mu)/\sigma$ has a standard normal distribution with mean of 0 and standard deviation of 1. We can calculate VaR from the following equation:

$\Pr(r_t \le -VaR_\alpha) = 1 - \alpha$  (3)

where $\alpha$ is the confidence level. If we assume a 99% confidence level, we have

$\Pr\left(\frac{r_t - \mu}{\sigma} \le -2.33\right) = 0.01$  (4)

In (4) we have -2.33 as the standard normal quantile at the 99% confidence level, and we will exceed the corresponding VaR only 1% of the time. From (4), it can be shown that the 99% confidence VaR is given by

$VaR_{0.99} = -(\mu - 2.33\,\sigma)$  (5)

Generalising from (5), we can state the $\alpha$-quantile VaR of the distribution as follows:

$VaR_\alpha = -\left(\mu + \sigma\,\Phi^{-1}(1-\alpha)\right)$  (6)

VaR is an intuitive measure of risk that can be easily implemented. This is evident in its wide use in the industry. However, is it an optimal measure? The next section addresses the limitations of VaR.
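Before turning to those limitations, equation (6) can be made concrete with a short numerical sketch. The Python fragment below is an illustration only, not part of the original proposal: the function name, the use of numpy/scipy, and the simulated return series are all assumptions made for exposition.

```python
import numpy as np
from scipy.stats import norm

def parametric_var(returns, alpha=0.99):
    """Normal (parametric) VaR of equation (6):
    VaR_alpha = -(mu + sigma * Phi^{-1}(1 - alpha)),
    returned as a positive loss, as a fraction of portfolio value."""
    mu = returns.mean()
    sigma = returns.std(ddof=1)
    # norm.ppf(1 - alpha) is negative for alpha > 0.5, about -2.33 at 99%
    return -(mu + sigma * norm.ppf(1 - alpha))

# Illustrative daily returns (simulated, not real market data)
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.02, size=1000)

print(f"99% one-day parametric VaR: {parametric_var(returns):.4%}")
```

On this simulated series the estimate is roughly 2.33 times the 2% daily volatility, less the small mean return, which is exactly the behaviour equation (6) predicts under normality.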
2.2.2 Limitations of VaR

Artzner et al. (1997, 1999) developed a set of axioms such that, if a risk measure satisfies them, that risk measure is 'coherent'. The implication of coherent measures of risk is that "it is not possible to assign a function for measuring risk unless it satisfies these axioms" (Mitra, 2009:8). Risk measures that satisfy these axioms can be considered universal and optimal, since they are founded on the same generally accepted mathematical axioms. Let $\rho(\cdot)$ be a risk measure defined on two portfolios $X$ and $Y$. Then the risk measure is coherent if it satisfies the following axioms:

(1) Monotonicity: if $X \le Y$ then $\rho(Y) \le \rho(X)$. We interpret the monotonicity axiom to mean that higher losses are associated with higher risk.

(2) Homogeneity: $\rho(\lambda X) = \lambda\,\rho(X)$ for $\lambda > 0$. Assuming that there is no liquidity risk, the homogeneity axiom means that risk is not a function of the quantity of a stock purchased; therefore we cannot reduce or increase risk by investing different amounts in the same stock.

(3) Translation invariance: $\rho(X + c) = \rho(X) - c$, where $c$ is an amount invested in a riskless security. This means that investing in a riskless asset does not increase risk with certainty.

(4) Sub-additivity: $\rho(X + Y) \le \rho(X) + \rho(Y)$. Possibly the most important axiom, sub-additivity ensures that a risk measure reflects diversification: the combined risk of two portfolios is no greater than the sum of the risks of the individual portfolios.

VaR does not satisfy the most important axiom, sub-additivity, and is therefore non-coherent. Moreover, VaR tells us what we can expect to lose if an extreme event does not occur; it does not tell us the extent of the losses we can incur if a "tail" event occurs. VaR is therefore not an optimal measure of risk. The non-coherence, and therefore non-optimality, of VaR as a measure of risk led to the development of conditional value-at-risk (CVaR) by Artzner et al. (1997, 1999), and Uryasev and Rockafeller (1999). I discuss CVaR in the next section.

2.3 Conditional Value-at-Risk

CVaR is also known as "expected shortfall" (ES), "tail VaR", or "tail conditional expectation", and it measures risk beyond VaR. Yamai and Yoshiba (2002) define CVaR as the conditional expectation of losses given that the losses exceed VaR. Mathematically, CVaR is given by the following:

$CVaR_\alpha = E\left[L \mid L \ge VaR_\alpha\right]$  (7)

CVaR offers more insight concerning risk than VaR in that it tells us what we can expect to lose if the losses exceed VaR. Unfortunately, the finance industry has been slow to adopt CVaR as its preferred risk measure. This is despite the fact that "the actuarial/insurance community has tended to pick up on developments in financial risk management much more quickly than financial risk managers have picked up on developments in actuarial science" (Dowd and Blake, 2006:194). Hopefully, the effects of the financial crisis will change this observation. In much of the application of VaR and CVaR, returns have been assumed to be normally distributed. However, it is widely accepted that returns are not normally distributed. The implication is that VaR and CVaR, as currently used in finance, will not capture extreme losses. This will lead to underestimation of risk and inadequate capital allocation across business units; in times of market stress, when extra capital is required, it will be inadequate, and this may lead to the insolvency of financial institutions. Methodologies that can capture extreme events are therefore needed. In the next section, I discuss the empirical evidence on financial returns, and thereafter discuss extreme value theory (EVT) as a suitable framework for modelling extreme losses.
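A historical-simulation sketch of equation (7) makes the relationship between VaR and CVaR concrete. Again, this is an illustrative sketch rather than the proposal's estimation method; the fat-tailed simulated returns and the function names are assumptions.

```python
import numpy as np

def historical_var_cvar(returns, alpha=0.99):
    """Historical VaR and CVaR at confidence level alpha.
    Losses are the negatives of returns, quoted as positive numbers;
    CVaR is the mean loss given losses at or beyond VaR (equation (7))."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)      # loss exceeded only (1 - alpha) of the time
    cvar = losses[losses >= var].mean()   # average loss in the worst (1 - alpha) tail
    return var, cvar

# Student-t returns: a simple way to mimic fat tails
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=4, size=5000)

var, cvar = historical_var_cvar(returns, alpha=0.99)
print(f"VaR(99%)  = {var:.4%}")
print(f"CVaR(99%) = {cvar:.4%}")   # always at least as large as VaR
```

Note that CVaR is always at least as large as VaR and, for continuous loss distributions, it is sub-additive, so it satisfies axiom (4) above.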
2.4 The Empirical Distribution of Financial Returns

Back in 1947, Geary wrote, "Normality is a myth; there never was, and never will be, a normal distribution" (as cited by Krishnaiah, 1980:279). Today this remark is supported by a voluminous amount of empirical evidence against normally distributed returns; nevertheless, normality continues to be the workhorse of empirical finance. If the normality assumption fails to pass empirical tests, why are practitioners so obsessed with the bell curve? Could their obsession be justified? To uncover some of the possible responses to these questions, let us first look at the importance of being normal, and then look at the dangers of incorrectly assuming normality.

2.4.1 The Importance of Being Normal

The normal distribution is the most widely used distribution in statistical analysis in all fields that utilise statistics to explain phenomena. The normal distribution can be assumed for a population, and it gives a rich set of mathematical results (Mardia, 1980:279). In other words, the mathematical representations are tractable and easy to implement. A population can simply be described by its mean and variance when the normal distribution is assumed. The panacea advantage is that the modelling process under the normality assumption is very simple. In fields that deal with natural phenomena, such as physics and geology, the normal distribution has unequivocally succeeded in explaining the variables of interest. The same cannot be said of the finance field. The normal probability distribution has been subject to rigorous empirical rejection. A number of stylized facts of asset returns, statistical tests of normality, and the occurrence of extreme negative returns dispute the normal distribution as the underlying data generating process for asset returns. We briefly discuss these empirical findings next.

2.4.2 Deviations From Normality

Ever since Mandelbrot (1963), Fama (1963), and Fama (1965), among others, it has been a known fact that asset returns are not normally distributed. The combined empirical evidence since the 1960s points out the following stylized facts of asset returns, several of which are checked numerically in the sketch that follows this list:

(1) Volatility clustering: periods of high volatility tend to be followed by periods of high volatility, and periods of low volatility tend to be followed by low volatility.

(2) Autoregressive price changes: a price change depends on price changes in past periods.

(3) Skewness: positive price changes and negative price changes are not of the same magnitude.

(4) Fat tails: the probabilities of extreme negative (positive) returns are much larger than predicted by the normal distribution.

(5) Time-varying tail thickness: more extreme losses occur during turbulent market activity than during normal market activity.

(6) Frequency-dependent fat tails: high-frequency data tends to be more fat-tailed than low-frequency data.
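The sketch below shows how stylized facts (3) and (4) can be checked on a return series with standard moment statistics and the Jarque-Bera normality test. It is a minimal illustration on simulated fat-tailed data, assumed for exposition; applying it to the stock index returns described in chapter 4 would be straightforward.

```python
import numpy as np
from scipy import stats

# Simulated heavy-tailed daily returns stand in for real index data
rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(df=3, size=2000)

skewness = stats.skew(returns)             # stylized fact (3)
excess_kurtosis = stats.kurtosis(returns)  # stylized fact (4); zero under normality
jb_stat, jb_pvalue = stats.jarque_bera(returns)

print(f"skewness        : {skewness:.3f}")
print(f"excess kurtosis : {excess_kurtosis:.3f}")
print(f"Jarque-Bera stat: {jb_stat:.1f} (p-value = {jb_pvalue:.2e})")
# A p-value near zero rejects normality, as the stylized facts predict.
```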
Better modelling frameworks to deal with extreme values that are characteristic of departures from normality have been developed. Extreme value theory is one such methodology that has enjoyed success in other fields outside finance, and has been used to model financial losses with success. In the next chapter, I present extreme value-based methodologies as a practical and better methodology to overcome non-normality in asset returns. CHAPTER 3: EXTREME VALUE THEORY: A SUITABLE AND ADEQUATE FRAMEWORK? 1.3. Extreme Value Theory Extreme value theory was developed to model extreme natural phenomena such as floods, extreme winds, and temperature, and is well established in fields such as engineering, insurance, and climatology. It provides a convenient way to model the tails of distributions that capture non-normal activities. Since it concentrates on the tails of distributions, it has been adopted to model asset returns in time of extreme market activity (see Embrechts et al. (1997); McNeil and Frey (2000); Danielsson and de Vries (2000). Gilli and Kellezi (2003) points out two related ways of modelling extreme events. The first way describes the maximum loss through a limit distribution known as the generalised extreme value distribution (GED), which is a family of asymptotic distributions that describe normalised maxima or minima.à The second way provides asymptotic distribution that describes the limit distribution of scaled excesses over high thresholds, and is known as the generalised Pareto distribution (GPD). The two limit distributions results into two approaches of EVT-based modelling the block of maxima method and the peaks over threshold method respectively[2]. 3.1. The Block of Maxima Method Let us consider independent and identically distributed (i.i.d) random variable à with common distribution function â⠱. Let be the maximum of the first random variables. Also, let us suppose is the upper end of. For, the corresponding results for the minima can be obtained from the following identity (8) almost surely converges to whether it is finite or infinite since, Following Embrechts et al. (1997), and Shanbhang and Rao (2003), the limit theory finds norming constants and a non-degenerate distribution function in such a way that the distribution function of a normalized version of converges to as follows;, as (9) is an extreme value distribution function, and â⠱ is the domain of attraction of, (written as), if equation (2) holds for suitable values of and. It can also be said that the two extreme value distribution functions and belong in the same family if for some à à à à à à à à à à and all. Fisher and Tippett (1928), De Haan (1970, 1976), Weissman (1978), and Embrechts et al. (1997) show that the limit distribution function belongs to one of the following three density functions for some. (10) (11) (12) Any extreme value distribution can be classified as one of the three types in (10), (11) and (12). à and à are the standard extreme value distribution and the corresponding random variables are called standard extreme random variables. For alternative characterization of the three distributions, see Nagaraja (1988), and Khan and Beg (1987). 3.2.à à The Generalized Extreme Value Distribution The three distribution functions given in (10), (11) and (12) above can be combined into one three-parameter distribution called the generalised extreme value distribution (GEV) given by,, with (13) We denote the GEV by, and the values andgive rise to the three distribution functions in (3). 
3.3 The Generalized Extreme Value Distribution

The three distribution functions given in (10), (11) and (12) above can be combined into one three-parameter distribution, called the generalised extreme value (GEV) distribution, given by

$H_{\xi,\mu,\sigma}(x) = \exp\left\{-\left[1 + \xi\left(\frac{x-\mu}{\sigma}\right)\right]^{-1/\xi}\right\}$, with $1 + \xi\left(\frac{x-\mu}{\sigma}\right) > 0$  (13)

We denote the GEV by $H_{\xi,\mu,\sigma}$, and the values of $\xi$ give rise to the three distribution functions in (10), (11) and (12). In equation (13), $\mu$, $\sigma$ and $\xi$ represent the location parameter, the scale parameter, and the tail-shape parameter respectively. $\xi > 0$ corresponds to the Fréchet distribution, and $\xi < 0$ corresponds to the Weibull distribution. The case where $\xi \to 0$ reduces to the Gumbel distribution. To obtain the estimates of $(\xi, \mu, \sigma)$ we use the maximum likelihood method, following Kabundi and Mwamba (2009). To start with, we fit the sample of maximum losses $x_1, \ldots, x_m$ to a GEV. Thereafter, we estimate the parameters of the GEV from the logarithmic form of the likelihood function, given by

$\ln L(\xi,\mu,\sigma) = -m\ln\sigma - \left(1 + \frac{1}{\xi}\right)\sum_{i=1}^{m}\ln\left[1 + \xi\left(\frac{x_i-\mu}{\sigma}\right)\right] - \sum_{i=1}^{m}\left[1 + \xi\left(\frac{x_i-\mu}{\sigma}\right)\right]^{-1/\xi}$  (14)

To obtain the estimates $(\hat\xi, \hat\mu, \hat\sigma)$, we take partial derivatives of equation (14) with respect to $\xi$, $\mu$ and $\sigma$, and equate them to zero.

3.3.1 Extreme Value-at-Risk

The EVaR, defined as the maximum likelihood $\alpha$-quantile estimator of $H_{\hat\xi,\hat\mu,\hat\sigma}$, is by definition given by

$\widehat{EVaR}_\alpha = H^{-1}_{\hat\xi,\hat\mu,\hat\sigma}(\alpha)$  (15)

This quantity is the $\alpha$-quantile of $H_{\hat\xi,\hat\mu,\hat\sigma}$, and I denote it as the $\alpha$-percent VaR, specified as follows, following Kabundi and Mwamba (2009), and Embrechts et al. (1997):

$\widehat{EVaR}_\alpha = \hat\mu - \frac{\hat\sigma}{\hat\xi}\left[1 - (-\ln\alpha)^{-\hat\xi}\right]$  (16)

Even though EVaR captures extreme losses, by extension from VaR it is non-coherent. As such, it cannot be used for the purpose of portfolio optimization, since it does not reflect diversification. To overcome this problem, in the next section I extend CVaR to ECVaR so as to capture extreme losses coherently.

3.3.2 Extreme Conditional Value-at-Risk (ECVaR): An Extreme Coherent Measure of Risk

I extend ECVaR from EVaR in a manner similar to the extension of CVaR from VaR. ECVaR can therefore be expressed as follows:

$ECVaR_\alpha = E\left[L \mid L \ge EVaR_\alpha\right]$  (17)
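Putting equations (13) to (17) together, the sketch below fits a GEV to block maxima of losses and reads off EVaR and ECVaR. It is an illustration of the estimation pipeline under stated assumptions, not the proposal's final implementation: note in particular that scipy's genextreme uses a shape parameter c equal to $-\xi$ in the notation of equation (13), and that ECVaR is approximated numerically by averaging fitted quantiles over the tail.

```python
import numpy as np
from scipy.stats import genextreme

def block_maxima(losses, block_size=21):
    """Maximum loss per block (about one trading month of daily data)."""
    losses = np.asarray(losses)
    n_blocks = len(losses) // block_size
    return losses[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)

# Toy daily losses (negated fat-tailed returns) stand in for index data
rng = np.random.default_rng(7)
losses = -0.01 * rng.standard_t(df=4, size=5040)

maxima = block_maxima(losses)

# Maximum likelihood fit of the GEV in equation (13); scipy's c = -xi
c, mu, sigma = genextreme.fit(maxima)

alpha = 0.99
evar = genextreme.ppf(alpha, c, loc=mu, scale=sigma)   # the quantile of equation (16)

# Equation (17): average the fitted quantiles over the tail beyond alpha,
# truncated at 0.9999 to keep the numerical approximation finite
tail_probs = np.linspace(alpha, 0.9999, 2000)
ecvar = genextreme.ppf(tail_probs, c, loc=mu, scale=sigma).mean()

print(f"tail-shape xi : {-c:.3f}")
print(f"EVaR(99%)     : {evar:.4%}")
print(f"ECVaR(99%)    : {ecvar:.4%}")
```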
In the following chapter, we describe the data and its sources.

CHAPTER 4: DATA DESCRIPTION

I will use stock market indexes of five advanced economies, comprising the United States, Japan, Germany, France, and the United Kingdom, and five emerging economies, comprising Brazil, Russia, India, China, and South Africa. Possible sources of the data are I-Net Bridge, Bloomberg, and individual country central banks.

CHAPTER 5: DISCUSSION OF EMPIRICAL RESULTS

In this chapter, I will discuss the empirical results. Specifically, the adequacy of ECVaR will be discussed relative to that of EVaR. Implications for risk measurement will also be discussed in this chapter.

CHAPTER 6: CONCLUSIONS

This chapter will give concluding remarks and directions for future research.

References

1. Markowitz, H.M.: 1952, Portfolio selection. Journal of Finance 7, 77-91.
2. Roy, A.D.: 1952, Safety first and the holding of assets. Econometrica 20(3), 431-449.
3. Sharpe, W.F.: 1964, Capital asset prices: a theory of market equilibrium under conditions of risk. The Journal of Finance 19(3), 425-442.
4. Black, F., and Scholes, M.: 1973, The pricing of options and corporate liabilities. Journal of Political Economy 81, 637-659.
5. Merton, R.C.: 1973, The theory of rational option pricing. Bell Journal of Economics and Management Science, Spring.
6. Artzner, Ph., Delbaen, F., Eber, J.-M., and Heath, D.: 1997, Thinking coherently. Risk 10(11), 68-71.
7. Artzner, Ph., Delbaen, F., Eber, J.-M., and Heath, D.: 1999, Coherent measures of risk. Mathematical Finance 9(3), 203-228.
8. Bernoulli, D.: 1954, Exposition of a new theory on the measurement of risk. Econometrica 22(1), 23-36. (Translation of a paper originally published in Latin in St. Petersburg in 1738.)
9. Butler, J.C., Dyer, J.S., and Jia, J.: 2005, An empirical investigation of the assumptions of risk-value models. Journal of Risk and Uncertainty 30(2), 133-156.
10. Brachinger, H.W., and Weber, M.: 1997, Risk as a primitive: a survey of measures of perceived risk. OR Spektrum 19, 235-250.
11. Fisher, I.: 1906, The Nature of Capital and Income. Macmillan.
12. von Neumann, J., and Morgenstern, O.: 1947, Theory of Games and Economic Behavior, 2nd ed. Princeton University Press.
13. Coombs, C.H., and Pruitt, D.G.: 1960, Components of risk in decision making: probability and variance preferences. Journal of Experimental Psychology 60, 265-277.
14. Pruitt, D.G.: 1962, Pattern and level of risk in gambling decisions. Psychological Review 69, 187-201.
15. Coombs, C.H.: 1964, A Theory of Data. New York: Wiley.
16. Coombs, C.H., and Meyer, D.E.: 1969, Risk preference in coin-toss games. Journal of Mathematical Psychology 6, 514-527.
17. Coombs, C.H., and Huang, L.C.: 1970a, Polynomial psychophysics of risk. Journal of Experimental Psychology 7, 317-338.
18. Markowitz, H.M.: 1959, Portfolio Selection: Efficient Diversification of Investments. Yale University Press, New Haven, USA.
19. Tobin, J.E.: 1958, Liquidity preference as behavior towards risk. Review of Economic Studies, 65-86.
20. Pratt, J.W.: 1964, Risk aversion in the small and in the large. Econometrica 32, 122-136.
21. Pollatsek, A., and Tversky, A.: 1970, A theory of risk. Journal of Mathematical Psychology 7, 540-553.
22. Luce, D.R.: 1980, Several possible measures of risk. Theory and Decision 12, 217-228.
23. J.P. Morgan and Reuters: 1996, RiskMetrics Technical Document. Available at http://riskmetrics.comrmcovv.html
24. Uryasev, S., and Rockafeller, R.T.: 1999, Optimization of conditional value-at-risk. Available at gloriamundi.org
25. Mitra, S.: 2009, Risk measures in quantitative finance. Available online.
26. Geary, R.C.: 1947, Testing for normality. Biometrika 34, 209-242.
27. Mardia, K.V.: 1980, in P.R. Krishnaiah, ed., Handbook of Statistics, Vol. 1. North-Holland Publishing Company, pp. 279-320.
28. Mandelbrot, B.: 1963, The variation of certain speculative prices. Journal of Business 36, 394-419.
29. Fama, E.: 1963, Mandelbrot and the stable Paretian hypothesis. Journal of Business 36, 420-429.
30. Fama, E.: 1965, The behavior of stock-market prices. Journal of Business 38, 34-105.
31. Esch, D.: 2010, Non-normality facts and fallacies. Journal of Investment Management 8(1), 49-61.
32. Stoyanov, S.V., Rachev, S., Racheva-Iotova, B., and Fabozzi, F.J.: 2011, Fat-tailed models for risk estimation. Journal of Portfolio Management 37(2). Available at iijournals.com/doi/abs/10.3905/jpm.2011.37.2.107
33. Embrechts, P., Klüppelberg, C., and Mikosch, T.: 1997, Modelling Extremal Events for Insurance and Finance. Springer.
34. McNeil, A., and Frey, R.: 2000, Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach. Journal of Empirical Finance 7(3-4), 271-300.
35. Danielsson, J., and de Vries, C.: 2000, Value-at-risk and extreme returns. Annales d'Economie et de Statistique 60, 239-270.
36. Gilli, M., and Kellezi, E.: 2003, An application of extreme value theory for measuring risk. Department of Econometrics, University of Geneva, Switzerland. Available from: gloriamundi.org/picsresources/mgek.pdf
37. Shanbhag, D.N., and Rao, C.R.: 2003, Extreme value theory, models and simulation. Handbook of Statistics, Vol. 21. Elsevier Science B.V.
[38] Fisher, R.A., and Tippett, L.H.C.: 1928, Limiting Forms of the Frequency Distribution of the Largest or Smallest Member of a Sample. Proc. Cambridge Philos. Soc., vol. 24, pp. 180-190.
[39] De Haan, L.: 1970, On Regular Variation and Its Application to the Weak Convergence of Sample Extremes. Mathematical Centre Tracts, Vol. 32. Mathematisch Centrum, Amsterdam.
[40] De Haan, L.: 1976, Sample Extremes: An Elementary Introduction. Statistica Neerlandica, vol. 30, pp. 161-172.
[41] Weissman, I.: 1978, Estimation of Parameters and Large Quantiles Based on the k Largest Observations. J. Amer. Statist. Assoc., vol. 73, pp. 812-815.
[42] Nagaraja, H.N.: 1988, Some Characterizations of Continuous Distributions Based on Regressions of Adjacent Order Statistics and Record Values. Sankhyā A, vol. 50, pp. 70-73.
[43] Khan, A.H., and Beg, M.I.: 1987, Characterization of the Weibull Distribution by Conditional Variance. Sankhyā A, vol. 49, pp. 268-271.
[44] Kabundi, A., and Mwamba, J.W.M.: 2009, Extreme Value at Risk: A Scenario for Risk Management. SAJE, forthcoming.
Monday, November 4, 2019
Organizational Frames
Organizations are tools or instruments to meet goals and objectives, and to carry out tasks (Johnson, 2003). As such, structures are needed to achieve calculably rational results as well as precision, stability, discipline, and reliability (Max Weber, cited in Johnson, 2003). Frames or windows, for instance, filter and order the world, providing a structure from which to view things. In my role as an Organizational Analyst for the City of San Jose, I had recommended the merger of two small community centers that were less than two miles apart and provided a similar range of programs and services. The recommendation was carried out and was incorporated into the City's proposed operating budget. The change, however, was poorly handled by the Parks and Recreation Department, whose managers had decided not to release information about the potential merger to center staff or to the community prior to publication of the proposed operating budget. The Alma community was therefore shocked to find that their Center was slated for closure, and the Alma employees were upset to learn that their jobs would be affected. Recovering from the initial shock, participants from the Alma Center protested the closure and eventually convinced the City Council to drop the proposal. In the 1980s, Bolman and Deal (1991) developed one of the most useful organizational typologies for viewing and studying leadership. Synthesizing existing theories of leadership and organizations into four traditions, they came up with a taxonomy labeled as "frames."
Saturday, November 2, 2019
Business Assignment
There are 60 retail shops through which the company sells its shoes and the other accessories it offers to its customers; these accessories include leather handbags, umbrellas, key fobs and other leather goods. Besides this, Cuero Ltd operates a distribution centre (Balcombe) that not only helps distribute its own products but also distributes associated products of its competitors, giving Cuero Ltd a strong and dedicated distribution channel. Along with this distribution centre, the company runs a mail order business that helps promote and market the shoes and accessories the company produces and sells. The performance of Cuero Ltd is to be appraised and evaluated on the basis of four major organisational performance areas of analysis: HR performance issues; financial performance issues; marketing issues; and supply management issues. Each of these areas is evaluated in a separate individual report to ascertain whether there are any issues or problems in that functional area and how they can be resolved. This report analyses the performance of the people employed by Cuero Ltd. Besides the people employed directly by the company, other personnel whose activities may affect the business of Cuero Ltd are also considered. Human resource performance is appraised using the critical incident method, where the performance of the staff is appraised after assessing the positive and negative areas of their work. After the performance is analysed, suitable recommendations for the improvement of this function are also stated. Human resource is defined as "the people that staff and operate an organization", as contrasted with the financial and material resources of an organization. Human Resources is also the organizational function that deals with people and issues related to people such as compensation,