rizal - school (study = read&try), by yosuke kurenai
Complutense University of Madrid (2012-11-19)<br />
<div style="text-align: justify;">
The Complutense University of Madrid (Spanish: Universidad Complutense de Madrid, UCM; Latin: Universitas Complutensis) is a university in Madrid and one of the oldest universities in the world. It is located on a sprawling campus that occupies the entirety of the Ciudad Universitaria district of Madrid, with annexes in the district of Somosaguas in the neighboring city of Pozuelo de Alarcón. According to the annual university rankings conducted by El Mundo, the Complutense University ranks as the top university in Spain, with its Schools of Philosophy, Spanish Literature, History, Pharmacy, Optometry, Journalism, Psychology, and Sociology holding the top national rankings. The University is also an affiliate of the Spanish Royal Societies of Physics and Mathematics.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The Complutense University of Madrid is a member of the Europaeum. Due to its long history in the capital, the Complutense University enjoys great support from Madrid-based institutions, at a local, national and international level. The School of Medicine operates the Hospital Clínico Universitario de San Carlos, as well as a number of other specialized clinics located on-campus, some of which are operated jointly with the Ministry of Health or perform specific research for the Ministry. The School of Medicine is not the only one with government involvement; indeed, despite past conflicts, the Complutense University shares a close bond with the Spanish government, as made evident by the fact that the presidential residence of La Moncloa and the Spanish Constitutional Court are both located directly on-campus (with the political center of the city at walking distance).</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The School of Communications, meanwhile, enjoys equally good relations with the press (a large part of its professors being former reporters, editors, or directors of major Spanish and international newspapers). Moreover, the School is known particularly for its role as one of the nation's main pre-screening locales; indeed, all major Spanish film productions are screened first before an audience of Complutense students, with the main actors or production figures of the films attending a post-screening press conference. Most recently, Blanca Portillo, Carmen Maura, Lola Dueñas and Yohana Cobo pre-screened Pedro Almodóvar's Volver; past pre-screening visitors have included director Santiago Segura, actor Alejo Sauras, and writer E. Annie Proulx. Each year, the Madrid Círculo de Bellas Artes extends special invitations to Complutense students during its series of annual conferences featuring prominent philosophers, sociologists, and psychologists. Likewise, all of the faculties have benefited greatly from lectures given by some of the most illustrious figures in recent history, from all fields: from singer-songwriter and Catalan activist Joan Manuel Serrat to historian Ernst Gombrich, and from writer Umberto Eco to communist politician Santiago Carrillo. Alejandro Amenábar wrote his first film, Tesis, while still attending the Complutense University. All the on-campus scenes in the film were shot in the School of Communications, which Amenábar himself had attended, and the building itself serves as a major device in the plot. Amenábar dropped out of the Complutense in part due to his antagonistic relationship with one of his professors, who kept failing him; in revenge, Amenábar named one of the main villains in Tesis, Professor Castro, after his teacher. Castro still teaches at the University. <span style="font-size: xx-small;">(wikipedia)</span></div>
University of Illinois (2012-10-24)<br />
<div style="text-align: justify;">
The University of Illinois at Urbana–Champaign (U of I, University of Illinois, UIUC, or simply Illinois) is a public research-intensive university in the U.S. state of Illinois. It is the flagship campus of the University of Illinois system. The University of Illinois at Urbana–Champaign is the second-oldest public university in the state, after Illinois State University, and is a founding member of the Big Ten Conference. The university is designated a RU/VH Research University (very high research activity). The campus library system possesses the second-largest university library in the United States and the fifth-largest library in the country overall. The university comprises 17 colleges that offer more than 150 programs of study.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The campus is known for its landscape and architecture, as well as its distinctive landmarks. The main research and academic facilities are divided almost exactly between the twin cities of Urbana and Champaign. The College of Agriculture, Consumer, and Environmental Sciences' research fields stretch south from Urbana and Champaign into Savoy and Champaign County. The university maintains formal gardens and a conference center in nearby Monticello at Allerton Park. The campus is based on the quadrangle design popular at many universities. Four main quads compose the center of the university and are arranged from north to south. The Beckman Quadrangle and the John Bardeen Quadrangle occupy the center of the Engineering Campus. Boneyard Creek flows through the John Bardeen Quadrangle, paralleling Green Street. The Beckman Quadrangle is primarily composed of research units and laboratories, and features a large solar calendar consisting of an obelisk and several copper fountains. The Main Quadrangle and South Quadrangle follow immediately after the John Bardeen Quad. The former makes up a large part of the Liberal Arts and Sciences portion of the campus, while the latter comprises many of the buildings of the College of ACES, spread across the campus.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
University housing for undergraduates is provided through twenty-two residence halls in both Urbana and Champaign. All incoming freshmen are required to live in certified student housing during their first year on campus.</div>
<div style="text-align: justify;">
All undergraduates within the University housing system are required to purchase some level of meal plan, although they are free to eat elsewhere if they choose. Graduate housing is usually offered through two graduate dormitories, restricted to those over twenty years of age, and through two university-owned apartment complexes. However, the recent record-sized freshman class has forced the housing division to convert one of the graduate dormitories into undergraduate housing. Students with disabilities are provided special housing options to accommodate their needs. The University of Illinois Urbana-Champaign is well known for being one of the first universities to provide accommodations for students with disabilities. There are a number of private dormitories around campus, as well as a few houses that are outside of the Greek system and offer a more communal living experience. The private dorms tend to be more expensive to live in compared to other housing options. Private, certified residences maintain reciprocity agreements with the University, allowing students to move between the public and private housing systems if they are dissatisfied with their living conditions. Most undergraduates choose to move into apartments or the Greek houses after their first or second year. The University Tenant Union offers advice on choosing apartments and the process of signing a lease. (<span style="background-color: white; font-family: sans-serif; line-height: 19.200000762939453px;"><span style="font-size: xx-small;">Wikipedia</span></span>)</div>
<div style="text-align: justify;">
<br /></div>
San Jose State University (2012-05-15)<br />
<div style="text-align: justify;">
What is now San Jose State University was originally established in 1857 as the Minns' Evening Normal School in San Francisco. The school was founded by George W. Minns.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In 1862, by act of the California legislature, Minns' Evening Normal School became the California State Normal School and graduated 54 women from a three-year program. The school eventually moved to San Jose in 1871 and was given Washington Square Park at Fourth and San Carlos Streets, where the campus remains to this day.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In 1881, the first branch campus of the California State Normal School was announced, which later became the University of California, Los Angeles (UCLA). A large bell was forged that year to commemorate the original California State Normal School location in San Jose. The bell was inscribed with the words "California State Normal School, A.D. 1881," and would sound on special occasions until 1946, when the college obtained new chimes. The original bell remains on the SJSU campus to this day, and is still associated with various student traditions and rituals.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In 1921, the school's name changed to the State Teachers College at San Jose. In 1935, the State Teachers Colleges became the California State Colleges, and the school's name was changed again, this time to San Jose State College. In 1972, upon meeting criteria established by the Board of Trustees and the Coordinating Council for Higher Education, SJSC was granted university status, and the name was changed to California State University, San Jose. Finally, in 1974, the California legislature voted to change the school's name to San Jose State University.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The SJSU main campus comprises approximately 55 buildings situated on a rectangular, 154-acre (62.3 ha) area in downtown San Jose. The campus is bordered by San Fernando Street to the north, San Salvador Street to the south, South 4th Street to the west, and South 10th Street to the east. The South Campus, which is home to many of the school's athletics facilities, is located approximately 1.5 mi (2.4 km) south of the main campus on South 7th Street.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
California State Normal School did not receive a permanent home until it moved from San Francisco to San Jose in 1871. The original California State Normal School campus in San Jose consisted of several rectangular, wooden buildings with a central grass quadrangle. The wooden buildings were destroyed by fire in 1880 and were replaced by interconnected stone and masonry structures of roughly the same configuration in 1881. These buildings were declared unsafe following the 1906 San Francisco earthquake and were being torn down when an aftershock of the very magnitude predicted to destroy them struck without causing any damage. Accordingly, demolition was stopped, and the portions of the buildings still standing were made into four halls: Tower Hall, Morris Dailey Auditorium, Washington Square Hall, and Dwight Bentel Hall. These four structures remain standing to this day, and are the oldest buildings on campus.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Beginning in the fall of 1994, the on-campus segments of San Carlos Street, Seventh Street and Ninth Street were closed to automobile traffic and converted to pedestrian walkways and green belts within the campus. San Carlos Street was renamed Paseo San Carlos, Seventh Street became El Paseo de César Chávez, and Ninth Street is now called the Ninth Street Plaza. The project was completed in 1996. Completed in 1999, the Business Classroom Project was a US$16 million renovation of the James F. Boccardo Business Education Center.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The US$1.5 million Heritage Gateway project, unveiled in 1999, was privately funded and featured the construction of eight oversized gateways around the main campus perimeter. In the Fall of 2000, the SJSU Police Department, which is part of the larger California State University Police Department, opened a new on-campus, multi-level facility on 7th Street. The new US$177 million Dr. Martin Luther King, Jr. Library, which opened its doors on August 1, 2003, won the Library Journal's prestigious 2004 Library of the Year award, the publication's highest honor. The King Library represents the first collaboration of its kind between a university and a major U.S. city. The library is eight stories high, has 475,000 square feet (44,100 m2) of floor space, and houses approximately 1.6 million volumes. San Jose's first public library occupied the same site from 1901 to 1936, and SJSU's Wahlquist Library occupied the site from 1961 to 2000, at which point it was torn down to begin construction of the King Library.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In 2002, three of SJSU's six red brick residence halls were demolished and replaced with the new Campus Village residence complex. The US$200 million housing facility comprises three buildings ranging from seven to 15 stories tall. The project increased student capacity for on-campus housing to roughly 3,500, and provides housing options for first-year students, upper-class students, graduate students and faculty, staff and guests of the university. Campus Village officially opened in 2005. In 2006, a US$2 million renovation of Tower Hall was completed. Tower Hall is the oldest and most recognizable building on campus. <span style="font-size: x-small;"><i>(wikipedia)</i></span></div>
San Pedro University (2012-05-11)<br />
<div style="text-align: justify;">
The University of San Pedro Sula (Universidad de San Pedro Sula, better known as "La Privada" [the private one]; www.usps.edu) was founded in 1977 and authorized by a governmental executive order on August 21, 1978.</div>
<div style="text-align: justify;">
The promoters of this first private higher-education initiative in Honduras were a group of persons representing the business, professional, and cultural sectors of San Pedro Sula, led by Mr. Jorge Emilio Jaar. These leaders were very interested in offering alternatives and new opportunities for professional education at a higher level for youth and other segments of the population who wanted to reach their goals of personal development in an atmosphere of freedom, democracy, and respect for human dignity.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Initially, they offered two majors: Business Administration (Administración de Empresas) and Law (Derecho). In the years that followed, they introduced others: Agriculture; Journalism (Periodismo), now called Communication and Advertising Sciences (Ciencias de la Comunicación y Publicidad); Banking Administration (Administración Bancaria); Architecture; Industrial Engineering (Ingeniería Industrial); Computer Science (Ciencias de la Computación), now called Information Management; Marketing (Mercadotecnia); and Tourism (Administración Turística). In total, the university offers ten majors.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In the university's first year there were only a dozen professors across the disciplines. Today, there are more than 300. In 2007, Universidad de San Pedro Sula, together with Fundación Educar, commenced the development of an educational media project to benefit the entire community in Central America, as well as the television and broadcasting industry in the region. Campus TV was inaugurated in November 2008. <span style="font-size: xx-small;">(wikipedia)</span></div>
University of the Basque Country (2012-04-12)<div style="text-align: justify;"><span >The University of the Basque Country (Basque: Euskal Herriko Unibertsitatea; Spanish: Universidad del País Vasco) is the only public university in the Basque Country, in northern Spain. It has campuses across the three provinces of the autonomous community: the Biscay Campus (in Leioa, Bilbao, Portugalete and Barakaldo), the Gipuzkoa Campus (in San Sebastián and Eibar), and the Álava Campus in Vitoria-Gasteiz. It is the main research institution in the Basque Country, carrying out 90% of the basic research performed in that territory and benefiting from the strong industrial environment the region provides.</span></div><div style="text-align: justify;"><span ><br /></span></div><div style="text-align: justify;"><span >Although there have been numerous institutes of learning in the Basque Country over the centuries, starting with the Universidad Sancti Spiritus de Oñati, it was not until the 20th century that serious efforts were made to create an official university for the Basque people. The first of these opened its doors in Bilbao in 1938, largely thanks to the zeal of the Basque president (lehendakari) at the time, José Antonio Aguirre, an alumnus of the University of Deusto. However, this was during the Spanish Civil War, an inopportune moment to open a centre of learning. The northwest of the Basque region mostly sided with the Republican movement at this time, earning the wrath of General Francisco Franco.
Thus, when Franco's armies entered Bilbao in 1939, the fledgling university was shut down.</span></div><div style="text-align: justify;"><span ><br /></span></div><div style="text-align: justify;"><span >It was not until 1968 that another university was founded in the Basque region: in that year, the University of Bilbao opened. In 1972, the Leioa premises were finished. They stood in a remote spot among cultivated fields; as in the case of the Somosaguas campus of the Complutense University of Madrid, the dictatorial authorities wanted to keep the rebellious students away from urban areas. In 1977, additional campuses sprang up in Álava and Gipuzkoa. Finally, in 1980, the university was officially designated the University of the Basque Country. </span><span style="font-family: Georgia, serif; ">As of 2005, 78 different degrees are offered, and the university's 48,000 students can choose from more than 1,300 subjects of study; 43% of the courses can be studied in the Basque language. The university is now recognised as one of the foremost in Spain, both in terms of the number of degrees offered and the quality of the typical degree awarded.</span></div><div style="text-align: justify;"><span ><br /></span></div><div style="text-align: justify;"><span >Its motto is a Basque-language verse, Eman ta zabal zazu ("Give and distribute [the fruit]"), from Gernikako Arbola, a Basque anthem of the 19th century. Its logo is an interpretation of the Guernica oak by the sculptor Eduardo Chillida. (wikipedia)</span></div>University of San Diego (2012-04-11)<div style="text-align: justify;"><span >Chartered in 1949, the university opened its doors to its first class of students in 1952 as the San Diego College for Women. Reverend Charles F.
Buddy, D.D., then bishop of the Diocese of San Diego and Reverend Mother Rosalie Hill, RSCJ, a Superior Vicaress of the Society of the Sacred Heart of Jesus, chartered the institution from resources drawn from their respective organizations on a stretch of land known as "Alcalá Park," named for San Diego de Alcalá. In September 1954, the San Diego College for Men and the School of Law opened. These two schools originally occupied Bogue Hall on the same site of University High School, which would later become the home of the University of San Diego High School. Starting in 1954, Alcalá Park also served as the diocesan chancery office and housed the episcopal offices, until the diocese moved to a vacated Benedictine convent that was converted to a pastoral center. In 1957, Immaculate Heart Major Seminary and St. Francis Minor Seminary were moved into their newly completed facility, now known as Maher Hall. The Immaculata Chapel, now no longer affiliated with USD, also opened that year as part of the seminary facilities. For nearly two decades, these schools co-existed on Alcalá Park. Immaculate Heart closed at the end of 1968, when its building was renamed De Sales Hall; St. Francis remained open until 1970, when it was transferred to another location on campus, leaving all of the newly named Bishop Leo T. Maher Hall to the newly merged co-educational University of San Diego in 1972. Since then, the university has grown quickly and has been able to increase its assets and academic programs. The student body, the local community, patrons, alumni, and many organizations have been integral to the university's development.</span></div><div style="text-align: justify;"><span ><br /></span></div><div style="text-align: justify;"><span >Significant periods of expansion of the university, since the 1972 merger, occurred in the mid-1980s, as well as in 1998, when Joan B. 
Kroc, philanthropist and wife of McDonald's financier Ray Kroc, endowed USD with a gift of $25 million for the construction of the Institute for Peace & Justice. Another significant donation to the college came in the form of multi-million dollar gifts from weight-loss tycoon Jenny Craig, inventor Donald Shiley, investment banker and alumnus Bert Degheri, and an additional gift of $50 million Mrs. Kroc left the School of Peace Studies upon her death. These gifts helped make possible, respectively, the Jenny Craig Pavilion (an athletic arena), the Donald P. Shiley Center for Science and Technology, the Joan B. Kroc School of Peace Studies, and the Degheri Alumni Center. As a result, USD has been able to host the West Coast Conference (WCC) basketball tournament in 2002, 2003 and 2008, and hosted international functions such as the Kyoto Laureate Symposium at the Joan B. Kroc Institute for Peace & Justice and at USD's Shiley Theatre. Shiley's gift has provided the university with some additional, and more advanced, teaching laboratories than it had previously. In 2005, the university expanded the Colachis Plaza from the Immaculata along Marian Way to the east end of Hall, which effectively closed the east end of the campus to vehicular traffic. That same year, the student body approved plans for a renovation and expansion of the Hahn University Center which began at the end of 2007. The new Student Life Pavilion (SLP) opened in 2009 and hosts the university's new student dining area(s), offices for student organizations and event spaces. The Hahn University Center is now home to administrative offices, meeting and event spaces, and a new restaurant and wine bar, La Gran Terazza. USD's current enrollment is 7,800 undergraduate and graduate students.</span></div><div style="text-align: justify;"><span ><br /></span></div><div style="text-align: justify;"><span >Though a Catholic university, the school is no longer governed directly by the Diocese of San Diego. 
Today, a lay board of trustees governs the university's operations. However, the Bishop of San Diego, the Most Rev. Robert H. Brom, retains a permanent seat on the board and control of the school's designation as "Catholic."</span></div><div style="text-align: justify;"><span ><br /></span></div><div style="text-align: justify;"><span >USD offers more than 60 degrees at the bachelor's, master's, and doctoral levels, and is divided into six schools and colleges. The College of Arts and Sciences and the School of Law are the oldest academic divisions at USD; the Joan B. Kroc School of Peace Studies is the university's newest school. USD offers an honors program at the undergraduate level, with approximately 300 students enrolled annually. <span >(wikipedia)</span></span></div>Biosynthesis and Cellulolysis (2012-04-08)<div><span >In vascular plants, cellulose is synthesized at the plasma membrane by rosette terminal complexes (RTCs). The RTCs are hexameric protein structures, approximately 25 nm in diameter, that contain the cellulose synthase enzymes that synthesise the individual cellulose chains. Each RTC floats in the cell's plasma membrane and "spins" a microfibril into the cell wall. RTCs contain at least three different cellulose synthases, encoded by CesA genes, in an unknown stoichiometry. Separate sets of CesA genes are involved in primary and secondary cell wall biosynthesis. Cellulose synthesis requires chain initiation and elongation, and the two processes are separate. CesA glucosyltransferase initiates cellulose polymerization using a steroid primer, sitosterol-beta-glucoside, and UDP-glucose. Cellulose synthase utilizes UDP-D-glucose precursors to elongate the growing cellulose chain.
A cellulase may function to cleave the primer from the mature chain.</span></div><div><span ><br /></span></div><div><span >Cellulolysis is the process of breaking down cellulose into smaller polysaccharides called cellodextrins, or completely into glucose units; this is a hydrolysis reaction. Because cellulose molecules bind strongly to each other, cellulolysis is relatively difficult compared to the breakdown of other polysaccharides. Processes do exist, however, for the breakdown of cellulose, such as the Lyocell process, which uses a combination of heated water and acetone to break down the cellulose strands. Most mammals have only a very limited ability to digest dietary fibres such as cellulose. Some ruminants, like cows and sheep, harbor certain symbiotic anaerobic bacteria (such as Cellulomonas) in the flora of the rumen; these bacteria produce enzymes called cellulases that help the microorganisms break down cellulose, and the breakdown products are then used by the bacteria for proliferation. The bacterial mass is later digested by the ruminant in its digestive system (stomach and small intestine). Similarly, lower termites contain in their hindguts certain flagellate protozoa which produce such enzymes; higher termites contain bacteria for the job. Some termites may also produce cellulase of their own. Fungi, which in nature are responsible for the recycling of nutrients, are also able to break down cellulose.</span></div><div><span >The enzymes utilized to cleave the glycosidic linkage in cellulose are glycoside hydrolases, including endo-acting cellulases and exo-acting glucosidases. Such enzymes are usually secreted as part of multienzyme complexes that may include dockerins and carbohydrate-binding modules.
<span >(wikipedia)</span></span></div>Parallel universes (2012-03-23)<div style="text-align: justify; "><div><span >There could be "an ensemble of parallel universes" such that when the traveler kills the grandfather, the act took place in (or resulted in the creation of) a parallel universe in which the traveler's counterpart never exists. However, his prior existence in the original universe is unaltered. Succinctly, this explanation states that if time travel is possible, then multiple versions of the future exist in parallel universes. This would also apply if a person went back in time to shoot his younger self: in one universe he would be dead, while in the other he would remain alive and well.</span></div><div><span ><br /></span></div><div><span >In quantum mechanics, the many-worlds interpretation suggests that every seemingly random quantum event with a non-zero probability actually occurs in all possible ways in different "worlds", so that history is constantly branching into different alternatives. The physicist David Deutsch has argued that if backwards time travel is possible, it should result in the traveler ending up in a different branch of history than the one he departed from.</span></div><div><span ><br /></span></div><div><span >M-theory is put forward as a hypothetical master theory that unifies the six superstring theories, although at present it is largely incomplete. One possible consequence of ideas drawn from M-theory is that multiple universes in the form of 3-dimensional membranes known as branes could exist side-by-side in a fourth large spatial dimension (which is distinct from the concept of time as a fourth dimension) - see Brane cosmology.
However, there is currently no argument from physics that there would be one brane for each physically possible version of history, as in the many-worlds interpretation, nor is there any argument that time travel would take one to a different brane.</span></div><div><span ><br /></span></div><div><span >Nonexistence theory: if one were to do something in the past that would cause one's own nonexistence, then upon returning to the future one would find a world in which the effects (and chain reactions thereof) of one's actions are absent, as the person never existed. Under this theory, though, the traveler himself would still exist.</span></div><div><span ><br /></span></div><div><span >Closely related but distinct is the notion of the time line as self-healing. The time traveler's actions are like throwing a stone into a large lake: the ripples spread, but are soon swamped by the effect of the existing waves. For instance, a time traveler could assassinate a politician who led his country into a disastrous war, but the politician's followers would then use his murder as a pretext for the war, and the emotional effect of that would cancel out the loss of the politician's charisma. Or the traveler could prevent a car crash from killing a loved one, only to have the loved one killed by a mugger, fall down the stairs, choke on a meal, be struck by a stray bullet, etc. Some science fiction stories suggest that any paradox would destroy the universe, or at least the parts of space and time affected by the paradox. The plots of such stories tend to revolve around preventing paradoxes.</span></div><div><span ><br /></span></div><div><span >A further theory holds that if time travel is possible, it is impossible to violate the grandfather paradox; it goes further, stating that any action taken that would itself negate the time-travel event cannot occur.
The consequences of such an event would in some way negate that event, whether by voiding the memory of what one is about to do before doing it, by preventing the action in some way, or even by destroying the universe, among other possible consequences. It states, therefore, that to successfully change the past one must do so incidentally. </span><span style="font-family: Georgia, serif; ">For example, if one tried to stop the murder of one's parents, he would fail. On the other hand, if one traveled back and did something else that as a result prevented the death of someone else's parents, then such an event would be successful, because the reason for the journey, and therefore the journey itself, remains unchanged, preventing a paradox.</span></div><div><span >In addition, if this event caused some colossal change in the history of mankind, and such an event would not void the ability or purpose of the journey back, it would occur, and would hold. In such a case, the memory of the event would immediately be modified in the mind of the time traveler.</span></div><div><span >An example of this would be for someone to travel back to observe life in Austria in 1887 and, while there, shoot five people, one of whom was one of Hitler's parents. Hitler would therefore never have existed, but since this would not prevent the invention of the means for time travel, or the purpose of the trip, such a change would hold. But for it to hold, every element that influenced the trip must remain unchanged. The Third Reich would not exist, and the world we know today would be completely different. This would rule out one party convincing a second party to travel back and kill the people without knowing who they are, in order to make the time line stick: by succeeding, the second party would void the first party's influence, and therefore its own actions. 
<span >(wikipedia)</span></span></div></div>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-66680857140059907102012-03-18T23:17:00.005-07:002012-03-18T23:28:33.855-07:00Cyclotrons, Betatrons and Synchrotrons<p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; text-align: -webkit-auto; background-color: rgb(255, 255, 255); "></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px; ">The earliest operational circular accelerators were cyclotrons, invented in 1929 by Ernest O. Lawrence at the University of California, Berkeley. Cyclotrons have a single pair of hollow 'D'-shaped plates to accelerate the particles and a single large dipole magnet to bend their path into a circular orbit. It is a characteristic property of charged particles in a uniform and constant magnetic field B that they orbit with a constant period, at a frequency called the cyclotron frequency, so long as their speed is small compared to the speed of light c. This means that the accelerating D's of a cyclotron can be driven at a constant frequency by a radio frequency (RF) accelerating power source, as the beam spirals outwards continuously. The particles are injected in the centre of the magnet and are extracted at the outer edge at their maximum energy.</span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px;">Cyclotrons reach an energy limit because of relativistic effects whereby the particles effectively become more massive, so that their cyclotron frequency drops out of synch with the accelerating RF. 
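The constant orbital period described above follows from the cyclotron frequency f = qB / (2πm). A minimal numerical sketch (the 1.5 T field is an assumed example value, not any particular machine) shows the frequency for a proton and how the relativistic mass increase at 15 MeV kinetic energy detunes the orbit from a fixed RF drive:

```python
import math

# Physical constants (SI units)
Q = 1.602176634e-19   # proton charge, C
M = 1.67262192e-27    # proton rest mass, kg
C = 2.99792458e8      # speed of light, m/s

def cyclotron_frequency(b_field, gamma=1.0):
    """Orbital frequency f = qB / (2*pi*gamma*m); gamma=1 gives the
    non-relativistic cyclotron frequency, the same at every radius."""
    return Q * b_field / (2 * math.pi * gamma * M)

B = 1.5  # tesla, assumed illustrative field
f0 = cyclotron_frequency(B)  # roughly 23 MHz for a proton

# At 15 MeV kinetic energy the effective mass grows by gamma = 1 + T/(m*c^2),
# so the orbital frequency drops and slips out of phase with the fixed RF.
gamma_15mev = 1 + (15e6 * Q) / (M * C**2)
f_15mev = cyclotron_frequency(B, gamma_15mev)

print(f"f0 = {f0/1e6:.1f} MHz, detuning at 15 MeV: {100*(1 - f_15mev/f0):.1f}%")
```

The percent-level frequency slip accumulates over the many thousands of turns a proton makes, which is why the simple cyclotron's energy ceiling sits near 15 MeV.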
Therefore simple cyclotrons can accelerate protons only to an energy of around 15 million electron volts (15 MeV, corresponding to a speed of roughly 10% of c), because the protons get out of phase with the driving electric field. If accelerated further, the beam would continue to spiral outward to a larger radius, but the particles would no longer gain enough speed to complete the larger circle in step with the accelerating RF. To accommodate relativistic effects, the magnetic field needs to increase toward higher radii, as is done in isochronous cyclotrons. An example of an isochronous cyclotron is the PSI Ring cyclotron, which delivers protons at an energy of 590 MeV, corresponding to roughly 80% of the speed of light. The advantage of such a cyclotron is its maximum achievable extracted proton current, currently 2.2 mA. The energy and current correspond to 1.3 MW of beam power, the highest of any existing accelerator.</span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px;"><br /></span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px;">Another type of circular accelerator, invented in 1940 for accelerating electrons, is the Betatron, a concept which originates ultimately from Norwegian-German scientist Rolf Widerøe. 
These machines, like synchrotrons, use a donut-shaped ring magnet (see below) with a cyclically increasing B field, but accelerate the particles by induction from the increasing magnetic field, as if they were the secondary winding in a transformer, due to the changing magnetic flux through the orbit.</span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px;">Achieving constant orbital radius while supplying the proper accelerating electric field requires that the magnetic flux linking the orbit be somewhat independent of the magnetic field on the orbit, bending the particles into a constant radius curve. These machines have in practice been limited by the large radiative losses suffered by the electrons moving at nearly the speed of light in a relatively small radius orbit.</span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px;"><br /></span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px;">To reach still higher energies, with relativistic mass approaching or exceeding the rest mass of the particles (for protons, billions of electron volts or GeV), it is necessary to use a synchrotron. This is an accelerator in which the particles are accelerated in a ring of constant radius. An immediate advantage over cyclotrons is that the magnetic field need only be present over the actual region of the particle orbits, which is very much narrower than the diameter of the ring. (The largest cyclotron built in the US had a 184-inch-diameter (4.7 m) magnet pole, whereas the diameter of the LEP and LHC is nearly 10 km. 
The aperture of the two beams of the LHC is of the order of a millimeter.)</span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px;">However, since the particle momentum increases during acceleration, it is necessary to turn up the magnetic field B in proportion to maintain constant curvature of the orbit. In consequence synchrotrons cannot accelerate particles continuously, as cyclotrons can, but must operate cyclically, supplying particles in bunches, which are delivered to a target or an external beam in beam "spills" typically every few seconds.</span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px;">Since high energy synchrotrons do most of their work on particles that are already traveling at nearly the speed of light c, the time to complete one orbit of the ring is nearly constant, as is the frequency of the RF cavity resonators used to drive the acceleration.</span></span></p><p style="margin-top: 0.4em; margin-right: 0px; margin-bottom: 0.5em; margin-left: 0px; "><span ><span style="line-height: 19px; ">Note also a further point about modern synchrotrons: because the beam aperture is small and the magnetic field does not cover the entire area of the particle orbit as it does for a cyclotron, several necessary functions can be separated. Instead of one huge magnet, one has a line of hundreds of bending magnets, enclosing (or enclosed by) vacuum connecting pipes. The design of synchrotrons was revolutionized in the early 1950s with the discovery of the strong focusing concept. The focusing of the beam is handled independently by specialized quadrupole magnets, while the acceleration itself is accomplished in separate RF sections, rather similar to short linear accelerators. 
Also, there is no necessity that cyclic machines be circular, but rather the beam pipe may have straight sections between magnets where beams may collide, be cooled, etc. This has developed into an entire separate subject, called "beam physics" or "beam optics". More complex modern synchrotrons such as the Tevatron, LEP, and LHC may deliver the particle bunches into storage rings of magnets with constant B, where they can continue to orbit for long periods for experimentation or further acceleration. The highest-energy machines such as the Tevatron and LHC are actually accelerator complexes, with a cascade of specialized elements in series, including linear accelerators for initial beam creation, one or more low energy synchrotrons to reach intermediate energy, storage rings where beams can be accumulated or "cooled" (reducing the magnet aperture required and permitting tighter focusing; see beam cooling), and a last large ring for final acceleration and experimentation. <span >(wikipedia)</span></span></span></p><p></p>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-81938761314438163762012-03-13T06:50:00.003-07:002012-03-13T06:54:03.406-07:00Arrhenius equation<div><span >The Arrhenius equation is a simple, but remarkably accurate, formula for the temperature dependence of the reaction rate constant, and therefore, rate of a chemical reaction. The equation was first proposed by the Dutch chemist J. H. van 't Hoff in 1884; five years later in 1889, the Swedish chemist Svante Arrhenius provided a physical justification and interpretation for it. Currently, it is best seen as an empirical relationship. 
It can be used to model the temperature variation of diffusion coefficients, the population of crystal vacancies, creep rates, and many other thermally-induced processes and reactions.</span></div><div><span >A historically useful generalization supported by the Arrhenius equation is that, for many common chemical reactions at room temperature, the reaction rate doubles for every 10 degree Celsius increase in temperature.</span></div><div><span ><br /></span></div><div><span >Common sense and chemical intuition suggest that the higher the temperature, the faster a given chemical reaction will proceed. Quantitatively, this relationship between the rate at which a reaction proceeds and its temperature is described by the Arrhenius equation. At higher temperatures, molecules move faster and collide more often, and a larger fraction of those collisions carry enough kinetic energy to overcome the activation energy of the reaction. The activation energy is the minimum energy required for a reaction to occur.</span></div><div><span ><br /></span></div><div><span >Both the Arrhenius activation energy and the rate constant k are experimentally determined, and represent macroscopic reaction-specific parameters that are not simply related to threshold energies and the success of individual collisions at the molecular level. Consider a particular collision (an elementary reaction) between molecules A and B. The collision angle, the relative translational energy, and the internal (particularly vibrational) energy will all determine the chance that the collision will produce a product molecule AB. Macroscopic measurements of E and k are the result of many individual collisions with differing collision parameters. To probe reaction rates at the molecular level, experiments have to be conducted under near-collisional conditions, and this subject is often called molecular reaction dynamics. 
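The equation itself is k = A·exp(−Ea/(R·T)). The "doubles every 10 °C" rule of thumb can be checked directly; the activation energy of 53 kJ/mol below is an assumed illustrative value, chosen because it happens to make the ratio come out near 2 around room temperature:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_k(a_factor, ea, temp):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return a_factor * math.exp(-ea / (R * temp))

# Assumed example activation energy; the pre-exponential factor A cancels
# out of the ratio, so its value is arbitrary here.
EA = 53e3  # J/mol

ratio = arrhenius_k(1.0, EA, 308.0) / arrhenius_k(1.0, EA, 298.0)
print(f"k(35 C) / k(25 C) = {ratio:.2f}")
```

For reactions with larger or smaller activation energies the factor per 10 °C is correspondingly larger or smaller, which is why the doubling rule is only a historical generalization.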
(<span >wikipedia</span>)</span></div>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-31191501343504066392012-01-27T02:46:00.000-08:002012-01-27T02:51:57.997-08:00Saccharides, Lignin and LipidsMonosaccharides are the simplest form of carbohydrates, consisting of a single simple sugar. They essentially contain an aldehyde or ketone group in their structure. The presence of an aldehyde group in a monosaccharide is indicated by the prefix aldo-. Similarly, a ketone group is denoted by the prefix keto-. Examples of monosaccharides are the hexoses glucose, fructose, and galactose and the pentoses ribose and deoxyribose. Consumed fructose and glucose have different rates of gastric emptying, are differentially absorbed, and have different metabolic fates, providing multiple opportunities for two different saccharides to differentially affect food intake. Disaccharides are formed when two monosaccharides, or two single simple sugars, form a bond with removal of water. They can be hydrolyzed to yield their saccharide building blocks by boiling with dilute acid or reacting them with appropriate enzymes. Examples of disaccharides include sucrose, maltose, and lactose. Polysaccharides are polymerized monosaccharides, or complex carbohydrates, containing multiple simple sugars. Examples are starch, cellulose, and glycogen. They are generally large and often have a complex branched connectivity. Because of their size, polysaccharides are not water-soluble, but their many hydroxy groups become hydrated individually when exposed to water, and some polysaccharides form thick colloidal dispersions when heated in water. Shorter polysaccharides, with 3 to 10 monomers, are called oligosaccharides. A fluorescent indicator-displacement molecular imprinting sensor was developed for discriminating saccharides; it successfully discriminated among three brands of orange juice beverage. 
The resulting change in fluorescence intensity of the sensing films is directly related to the saccharide concentration.<br /><br />Lignin is a complex polyphenolic macromolecule composed mainly of β-O-4 aryl linkages. After cellulose, lignin is the second most abundant biopolymer and is one of the primary structural components of most plants. It contains subunits derived from p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol, and is unusual among biomolecules in that it is racemic. The lack of optical activity is due to the polymerization of lignin, which occurs via free radical coupling reactions in which there is no preference for either configuration at a chiral center.<br /><br />Lipids are chiefly fatty acid esters, and are the basic building blocks of biological membranes. Another biological role is energy storage (e.g., triglycerides). Most lipids consist of a polar or hydrophilic head (typically glycerol) and one to three nonpolar or hydrophobic fatty acid tails, and therefore they are amphiphilic. Fatty acids consist of unbranched chains of carbon atoms that are connected by single bonds alone (saturated fatty acids) or by both single and double bonds (unsaturated fatty acids). The chains are usually 14 to 24 carbons long, and nearly always contain an even number of carbon atoms. <span style="font-size:78%;">(wikipedia)</span>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-41119528032376728532012-01-26T15:22:00.000-08:002012-01-26T15:27:03.592-08:00E-Business: SCM, SCE and SCPOracle offers four Supply Chain Management (SCM) product lines that address requirements across procurement, order management, manufacturing, product lifecycle management, maintenance, logistics, and supply chain planning and execution. 
Regardless of your industry focus or individual supply chain model, Oracle provides SCM solutions with the flexibility to streamline all supply chain processes, adopt industry and business best practices, and leverage information for continuous improvement.<br /><br />The E-Business Suite Supply Chain Execution family of applications supports the complete order-to-cash business process, capturing demand from any channel, providing inbound and outbound transportation management, and supporting large, complex distribution operations. A unified data model provides a single, accurate view of the entire supply chain execution process, so you can plan, manage, and control the flow and storage of goods, services, and related information from the point of origin to the point of consumption in order to meet customer requirements. And when Oracle Supply Chain Execution runs on Oracle technology, you speed implementation, optimize performance, streamline support, and maximize return on your investment.<br /><br />The E-Business Suite Supply Chain Planning family of applications provides holistic planning capabilities, from long-range aggregate planning to short-term detail scheduling. A unified data model provides a single, accurate view of the entire planning process, so you can optimize the flow of materials, cash, and information. And when Oracle Supply Chain Planning runs on Oracle technology, you speed implementation, optimize performance, streamline support, and maximize return on your investment. <span style="font-size:78%;">(oracle)</span>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-85972477953612518042009-08-10T03:42:00.000-07:002009-08-10T03:44:02.789-07:00antimatterIn particle physics, antimatter is the extension of the concept of the antiparticle to matter, where antimatter is composed of antiparticles in the same way that normal matter is composed of particles. 
For example, an antielectron (a positron, an electron with a positive charge) and an antiproton (a proton with a negative charge) could form an antihydrogen atom in the same way that an electron and a proton form a normal matter hydrogen atom. Furthermore, mixing matter and antimatter would lead to the annihilation of both in the same way that mixing antiparticles and particles does, thus giving rise to high-energy photons (gamma rays) or other particle–antiparticle pairs.<br /><br />There is considerable speculation as to why the observable universe is apparently almost entirely matter, whether there exist other places that are almost entirely antimatter instead, and what might be possible if antimatter could be harnessed, but at this time the apparent asymmetry of matter and antimatter in the visible universe is one of the greatest unsolved problems in physics. The process by which this asymmetry between particles and antiparticles developed is called baryogenesis.<br /><br />Almost every object observable from the Earth seems to be made of matter rather than antimatter. Many scientists believe that this preponderance of matter over antimatter (known as baryon asymmetry) is the result of an imbalance in the production of matter and antimatter particles in the early universe, in a process called baryogenesis. The amount of matter presently observable in the universe only requires an imbalance in the early universe on the order of one extra matter particle per billion matter-antimatter particle pairs.<br /><br />Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays impacting Earth's atmosphere (or any other matter in the solar system) produce minute quantities of antimatter in the resulting particle jets, which are immediately annihilated by contact with nearby matter. 
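The energy released in such annihilations is fixed by the rest masses involved. For an electron-positron pair the combined rest-mass energy E = 2·mₑ·c² is emitted as gamma rays, which is the signature used to detect annihilation; a short check:

```python
# Rest-mass energy released when an electron-positron pair annihilates,
# emitted as (at least) two gamma photons: E = 2 * m_e * c^2.
M_E = 9.1093837e-31   # electron mass, kg
C   = 2.99792458e8    # speed of light, m/s
EV  = 1.602176634e-19 # joules per electron volt

e_total_mev = 2 * M_E * C**2 / EV / 1e6   # total energy in MeV
per_photon_kev = e_total_mev * 1e3 / 2    # energy of each gamma, in keV

print(f"total: {e_total_mev:.3f} MeV, per photon: {per_photon_kev:.0f} keV")
```

The roughly 511 keV per photon is the characteristic gamma-ray line that annihilation surveys look for.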
It may similarly be produced in regions like the center of the Milky Way Galaxy and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the gamma rays produced when positrons annihilate with nearby matter.<br /><br />Recent observations by the European Space Agency's INTEGRAL (International Gamma-Ray Astrophysics Laboratory) satellite may explain the origin of a giant cloud of antimatter surrounding the galactic center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries, binary star systems containing black holes or neutron stars, mostly on one side of the galactic center. While the mechanism is not fully understood, it is likely to involve the production of electron-positron pairs, as ordinary matter gains tremendous energy while falling into a stellar remnant.<br /><br />(source: wikipedia)yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-82230190750367503042009-06-17T05:31:00.000-07:002009-06-17T05:43:52.750-07:00nuclear fusion<div style="text-align: justify;">In nuclear physics and nuclear chemistry, nuclear fusion is the process by which multiple like-charged atomic nuclei join together to form a heavier nucleus, accompanied by the release or absorption of energy.<br /><br />The fusion of two nuclei with lower mass than iron (which, along with nickel, has the largest binding energy per nucleon) generally releases energy, while the fusion of nuclei heavier than iron absorbs energy; vice-versa for the reverse process, nuclear fission. 
In the simplest case of hydrogen fusion, two protons have to be brought close enough for the nuclear force to overcome their mutual electric repulsion, with a subsequent release of energy.<br /><br />Nuclear fusion occurs naturally in stars. Artificial fusion in human enterprises has also been achieved, although it has not yet been completely controlled. Building upon the nuclear transmutation experiments of Ernest Rutherford done a few years earlier, fusion of light nuclei (hydrogen isotopes) was first observed by Mark Oliphant in 1932; the steps of the main cycle of nuclear fusion in stars were subsequently worked out by Hans Bethe throughout the remainder of that decade. Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project, but was not successful until 1952. Research into controlled fusion for civilian purposes began in the 1950s, and continues to this day.<br /><br />Fusion reactions power the stars and produce all but the lightest elements in a process called nucleosynthesis. Although the fusion of lighter elements in stars releases energy, production of the heavier elements absorbs energy.<br /><br />When the fusion reaction is a sustained uncontrolled chain, it can result in a thermonuclear explosion, such as that generated by a hydrogen bomb. Reactions which are not self-sustaining can still release considerable energy, as well as large numbers of neutrons.<br /><br />Research into controlled fusion, with the aim of producing fusion power for the production of electricity, has been conducted for over 50 years. It has been accompanied by extreme scientific and technological difficulties, but has resulted in steady progress. At present, break-even (self-sustaining) controlled fusion reactions have been demonstrated in a few tokamak-type reactors around the world. 
These have enabled the creation of workable designs for a reactor which will deliver ten times more fusion energy than the amount needed to heat the plasma to the required temperatures (see ITER, which is scheduled to be operational in 2018).<br /><br />It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen. This is because all nuclei have a positive charge (due to their protons), and as like charges repel, nuclei strongly resist being put too close together. Accelerated to high speeds (that is, heated to thermonuclear temperatures), they can overcome this electromagnetic repulsion and get close enough for the attractive nuclear force to be sufficiently strong to achieve fusion. The fusion of lighter nuclei, which creates a heavier nucleus and a free neutron, generally releases more energy than it takes to force the nuclei together; this is an exothermic process that can produce self-sustaining reactions.<br /><br />The energy released in most nuclear reactions is much larger than that in chemical reactions, because the binding energy that holds a nucleus together is far greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 electron volts, less than one-millionth of the 17 MeV released in the D-T (deuterium-tritium) reaction. Fusion reactions have an energy density many times greater than nuclear fission; i.e., the reactions produce far greater energies per unit of mass even though individual fission reactions are generally much more energetic than individual fusion reactions, which are themselves millions of times more energetic than chemical reactions. 
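The "millions of times more energetic than chemical reactions" comparison can be checked with back-of-envelope arithmetic, using the two energy scales quoted above (the D-T yield of about 17.6 MeV is the standard figure; treating the fuel mass as a rounded 5 atomic mass units per reaction is an assumption for the sketch):

```python
# Chemical binding scale (hydrogen ionization) versus the D-T fusion yield,
# and the resulting energy released per kilogram of fuel.
EV = 1.602176634e-19   # joules per electron volt
U  = 1.66053907e-27    # atomic mass unit, kg

e_chem   = 13.6 * EV        # H ionization energy, a typical chemical scale
e_fusion = 17.6e6 * EV      # D-T reaction yield (~17 MeV)

ratio = e_chem / e_fusion   # "less than one-millionth"

# Per unit mass: one D-T reaction consumes roughly 5 u of fuel (2H + 3H).
energy_per_kg = e_fusion / (5 * U)

print(f"chemical/fusion ratio: {ratio:.1e}, D-T yield: {energy_per_kg:.1e} J/kg")
```

The result, a few times 10¹⁴ joules per kilogram, is what underlies the enormous energy density attributed to fusion in the text.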
Only direct conversion of mass into energy, such as that caused by the collision of matter and antimatter, is more energetic per unit of mass than nuclear fusion.<br /><br />(source: wikipedia)</div>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-68757966847695021912009-05-30T21:49:00.000-07:002009-05-30T21:57:40.105-07:00The laws of thermodynamicsIn thermodynamics, there are four laws that do not depend on the details of the systems under study or how they interact. Hence these laws are very generally valid, and can be applied to systems about which one knows nothing other than the balance of energy and matter transfer. Examples of such systems include Einstein's prediction, around the turn of the 20th century, of spontaneous emission, and ongoing research into the thermodynamics of black holes.<br /><br />These four laws are:<br />Zeroth law of thermodynamics, about thermal equilibrium:<br />If two thermodynamic systems are separately in thermal equilibrium with a third, they are also in thermal equilibrium with each other.<br />If we grant that all systems are (trivially) in thermal equilibrium with themselves, the Zeroth law implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems. This law is tacitly assumed in every measurement of temperature. 
Thus, if we want to know if two bodies are at the same temperature, it is not necessary to bring them into contact and watch whether their observable properties change with time.<br /><br />First law of thermodynamics, about the conservation of energy:<br />The change in the internal energy of a closed thermodynamic system is equal to the sum of the amount of heat energy supplied to the system and the work done on the system.<br /><br />Second law of thermodynamics, about entropy:<br />The total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value.<br /><br />Third law of thermodynamics, about the absolute zero of temperature:<br />As a system asymptotically approaches absolute zero of temperature, all processes virtually cease and the entropy of the system asymptotically approaches a minimum value; also stated as: "the entropy of all systems and of all states of a system is zero at absolute zero" or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes".<br /><br />The following has sometimes been called the "Fourth Law of Thermodynamics", about the transfer of heat energy between systems.<br /><br />Onsager reciprocal relations:<br />In thermodynamic systems connected so that they are in equilibrium neither in pressure nor in temperature, the heat flow caused per unit of pressure difference equals the density (matter) flow caused per unit of temperature difference.<br /><br />(source: wikipedia)yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-3912542020762565652009-05-26T01:15:00.000-07:002009-05-26T01:36:41.594-07:00Security Proofs and Quantum AttackThe above is just a simple example of an attack. If Eve is assumed to have unlimited resources, for example classical and quantum computing power, there are many more attacks possible. 
BB84 has been proven secure against any attacks allowed by quantum mechanics, both for sending information using an ideal photon source which only ever emits a single photon at a time[12], and also using practical photon sources which sometimes emit multiphoton pulses. These proofs are unconditionally secure in the sense that no conditions are imposed on the resources available to the eavesdropper; however, other conditions are required:<br />Eve cannot access Alice and Bob's encoding and decoding devices.<br />The random number generators used by Alice and Bob must be trusted and truly random (for example a quantum random number generator).<br />The classical communication channel must be authenticated using an unconditionally secure authentication scheme.<br /><br />Quantum cryptography, when used without authentication, is vulnerable to a man-in-the-middle attack to the same extent as any classical protocol, since no principle of quantum mechanics can distinguish friend from foe. As in the classical case, Alice and Bob cannot authenticate each other and establish a secure connection without some means of verifying each other's identities (such as an initial shared secret). If Alice and Bob have an initial shared secret, then they can use an unconditionally secure authentication scheme (such as Carter-Wegman) along with quantum key distribution to exponentially expand this key, using a small amount of the new key to authenticate the next session. Several methods to create this initial shared secret have been proposed, for example using a third party or chaos theory.<br /><br />In the BB84 protocol Alice sends quantum states to Bob using single photons. In practice many implementations use laser pulses attenuated to a very low level to send the quantum states. These laser pulses contain a very small number of photons on average, for example 0.2 photons per pulse, distributed according to a Poissonian distribution. 
This means most pulses actually contain no photons (no pulse is sent), some pulses contain one photon (which is desired), and a few pulses contain two or more photons. If a pulse contains more than one photon, Eve can split off the extra photons and transmit the remaining single photon to Bob. This is the basis of the photon number splitting (PNS) attack, where Eve stores these extra photons in a quantum memory until Bob detects the remaining single photon and Alice reveals the encoding basis. Eve can then measure her photons in the correct basis and obtain information on the key without introducing detectable errors.<br /><br />Even with the possibility of a PNS attack a secure key can still be generated, as shown in the GLLP security proof; however, a much higher amount of privacy amplification is needed, reducing the secure key rate significantly (with PNS the rate scales as t<sup>2</sup>, compared to t for a single photon source, where t is the transmittance of the quantum channel).<br /><br />There are several solutions to this problem. The most obvious is to use a true single photon source instead of an attenuated laser. While such sources are still at a developmental stage, QKD has been carried out successfully with them. However, as current sources operate at low efficiency and repetition frequency, key rates and transmission distances are limited. Another solution is to modify the BB84 protocol, as is done for example in the SARG04 protocol, in which the secure key rate scales as t<sup>3/2</sup>. The most promising solution is the decoy state idea[21], in which Alice randomly sends some of her laser pulses with a lower average photon number. These decoy states can be used to detect a PNS attack, as Eve has no way to tell which pulses are signal and which are decoy. Using this idea the secure key rate scales as t, the same as for a single photon source. 
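The photon statistics behind the PNS attack follow directly from the Poisson distribution quoted above. For a mean of 0.2 photons per pulse, the fractions of empty, single-photon, and vulnerable multiphoton pulses work out as:

```python
import math

def poisson_pmf(k, mu):
    """P(N = k) for a Poisson-distributed photon number with mean mu."""
    return math.exp(-mu) * mu**k / math.factorial(k)

mu = 0.2  # mean photons per attenuated laser pulse, as in the text

p0 = poisson_pmf(0, mu)    # empty pulse: nothing is sent
p1 = poisson_pmf(1, mu)    # the desired single-photon pulse
p_multi = 1 - p0 - p1      # multiphoton pulses that Eve can split

print(f"P(0)={p0:.3f}  P(1)={p1:.3f}  P(>=2)={p_multi:.4f}")
```

About 82% of pulses are empty and under 2% carry the extra photons Eve can exploit, which is why attenuated-laser QKD trades raw key rate for a small multiphoton leakage.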
This idea has been implemented successfully in several QKD experiments, allowing for high key rates secure against all known attacks.<br /><br />Hacking attacks target imperfections in the implementation of the protocol rather than the protocol itself. If the equipment used in quantum cryptography can be tampered with, it could be made to generate insecure keys via a random number generator attack. Another common class of attacks is the Trojan horse attack, which does not require physical access to the endpoints: rather than attempt to read Alice and Bob's single photons, Mallory sends a large pulse of light back to Alice in between transmitted photons. Alice's equipment reflects some of Mallory's light, revealing the state of Alice's polarizer. This attack is easy to avoid, for example by using an optical isolator to prevent light from entering Alice's system, and all other hacking attacks can similarly be defeated by modifying the implementation. Apart from the Trojan horse attack there are several other known attacks, including faked-state attacks, phase-remapping attacks and time-shift attacks. The time-shift attack has even been successfully demonstrated on a commercial quantum crypto-system, the first successful demonstration of quantum hacking against a non-homemade quantum key distribution system.<br /><br />Because currently a dedicated fibre optic line (or line of sight in free space) is required between the two points linked by quantum cryptography, a denial of service attack can be mounted by simply cutting or blocking the line or, perhaps more surreptitiously, by attempting to tap it.<br /><br />(wikipedia)yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-80715972562351042912009-05-17T18:25:00.000-07:002009-05-17T20:56:31.831-07:00Quantum CryptographyOr quantum key distribution (QKD), uses quantum mechanics to guarantee secure communication. 
It enables two parties to produce a shared random bit string known only to them, which can be used as a key to encrypt and decrypt messages.<br /><br />An important and unique property of quantum cryptography is the ability of the two communicating users to detect the presence of any third party trying to gain knowledge of the key. This results from a fundamental aspect of quantum mechanics: the process of measuring a quantum system in general disturbs the system. A third party trying to eavesdrop on the key must in some way measure it, thus introducing detectable anomalies. By using quantum superpositions or quantum entanglement and transmitting information in quantum states, a communication system can be implemented which detects eavesdropping. If the level of eavesdropping is below a certain threshold, a key can be produced that is guaranteed to be secure (i.e. the eavesdropper has no information about it); otherwise no secure key is possible and communication is aborted.<br /><br />The security of quantum cryptography relies on the foundations of quantum mechanics, in contrast to traditional public key cryptography, which relies on the computational difficulty of certain mathematical functions and cannot provide any indication of eavesdropping or any guarantee of key security.<br /><br />Quantum cryptography is only used to produce and distribute a key, not to transmit any message data. This key can then be used with any chosen encryption algorithm to encrypt (and decrypt) a message, which can then be transmitted over a standard communication channel. The algorithm most commonly associated with QKD is the one-time pad, as it is provably secure when used with a secret, random key.<br /><br />Quantum communication involves encoding information in quantum states, or qubits, as opposed to the classical use of bits. Usually, photons are used for these quantum states. Quantum cryptography exploits certain properties of these quantum states to ensure its security. 
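The one-time pad pairing mentioned above is simple to sketch. The message and key here are hypothetical; in practice the key bytes would come from the QKD session:

```python
import os

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each message byte with the matching key byte.
    Provably secure only if the key is truly random, secret, at least
    as long as the message, and never reused."""
    assert len(key) >= len(data)
    return bytes(m ^ k for m, k in zip(data, key))

message = b"attack at dawn"
key = os.urandom(len(message))               # stand-in for QKD-derived key bits
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message   # decryption is the same XOR
```

Because XOR is its own inverse, the same function both encrypts and decrypts, which is why QKD only needs to deliver the key, not a cipher.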
There are several different approaches to quantum key distribution, but they can be divided into two main categories depending on which property they exploit.<br /><br />In contrast to classical physics, the act of measurement is an integral part of quantum mechanics. In general, measuring an unknown quantum state will change that state in some way. This is known as quantum indeterminacy, and underlies results such as the Heisenberg uncertainty principle, the information-disturbance theorem and the no-cloning theorem. This can be exploited in order to detect any eavesdropping on communication (which necessarily involves measurement) and, more importantly, to calculate the amount of information that has been intercepted. The quantum states of two (or more) separate objects can become linked together in such a way that they must be described by a combined quantum state, not as individual objects. This is known as entanglement and means that, for example, performing a measurement on one object will affect the other. If an entangled pair of objects is shared between two parties, anyone intercepting either object will alter the overall system, allowing the presence of the third party (and the amount of information they have gained) to be determined.<br /><br />These two approaches can each be further divided into three families of protocols: discrete variable, continuous variable and distributed phase reference coding. Discrete variable protocols were the first to be invented, and they remain the most widely implemented. The other two families are mainly concerned with overcoming practical limitations of experiments. 
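As a toy illustration of the discrete-variable approach, here is a hypothetical BB84-style sifting sketch: Alice encodes random bits in random bases, Bob measures in random bases, and only the positions where their bases happen to match (about half) are kept for the key. No eavesdropper or channel noise is modelled:

```python
import random

def bb84_sift(n: int, seed: int = 0):
    """Toy BB84 sifting on a noiseless channel with no eavesdropper."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]  # rectilinear / diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    # When bases match Bob reads Alice's bit; otherwise his result is random.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Sifting: publicly compare bases and keep only the matching positions.
    alice_key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return alice_key, bob_key

alice_key, bob_key = bb84_sift(1000)
assert alice_key == bob_key   # no Eve, no noise: the sifted keys agree
```

In a real run Alice and Bob would additionally sacrifice a sample of the sifted key to estimate the error rate and detect Eve.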
The two protocols described below both use discrete variable coding.<br /><br />(source: wikipedia)yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-60343864709923252302009-05-10T22:39:00.000-07:002009-05-12T21:24:46.274-07:00Other Microcontroller Features<div style="text-align: justify;">Since embedded processors are usually used to control devices, they sometimes need to accept input from the device they are controlling. This is the purpose of the analog to digital converter. Since processors are built to interpret and process digital data, i.e. 1s and 0s, they cannot do anything with the analog signals that a device may send them. So the analog to digital converter is used to convert the incoming data into a form that the processor can recognize. There is also a digital to analog converter that allows the processor to send data to the device it is controlling.<br /><br />In addition to the converters, many embedded microprocessors include a variety of timers as well. One of the most common types of timers is the Programmable Interval Timer, or PIT for short. A PIT just counts down from some value to zero. Once it reaches zero, it sends an interrupt to the processor indicating that it has finished counting. This is useful for devices such as thermostats, which periodically test the temperature around them to see if they need to turn the air conditioner or the heater on.<br /><br />A Time Processing Unit, or TPU for short, is a more sophisticated timer. In addition to counting down, the TPU can detect input events, generate output events, and perform other useful operations. 
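The PIT behaviour described above (count down from a programmed value, raise an interrupt at zero) can be modelled in a few lines. This is a software sketch of the hardware, with a callback standing in for the CPU's interrupt line:

```python
class ProgrammableIntervalTimer:
    """Software model of a PIT: counts down and fires a callback at zero."""
    def __init__(self, reload_value, on_interrupt):
        self.reload_value = reload_value
        self.count = reload_value
        self.on_interrupt = on_interrupt   # stands in for the interrupt line

    def tick(self):
        """One clock tick: decrement, and signal the CPU at zero."""
        self.count -= 1
        if self.count == 0:
            self.on_interrupt()             # "interrupt": countdown finished
            self.count = self.reload_value  # periodic mode: reload and repeat

fires = []
pit = ProgrammableIntervalTimer(4, lambda: fires.append("check temperature"))
for _ in range(12):   # 12 clock ticks with a reload value of 4
    pit.tick()
print(len(fires))     # 3
```

A thermostat built this way would put its temperature-sampling code in the interrupt handler and sleep between ticks.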
A dedicated Pulse Width Modulation (PWM) block makes it possible for the CPU to control power converters, resistive loads, motors, etc., without using lots of CPU resources in tight timer loops.<br /><br />A Universal Asynchronous Receiver/Transmitter (UART) block makes it possible to receive and transmit data over a serial line with very little load on the CPU. For those wanting Ethernet, one can use an external chip such as the Crystal Semiconductor CS8900A, Realtek RTL8019, or Microchip ENC28J60. All of them allow easy interfacing with a low pin count.<br /><br />Some microcontrollers use a Harvard architecture: separate memory buses for instructions and data, allowing accesses to take place concurrently. Where a Harvard architecture is used, instruction words for the processor may be a different bit size than the length of internal memory and registers; for example: 12-bit instructions used with 8-bit data registers.<br /><br />The decision of which peripherals to integrate is often difficult. Microcontroller vendors often trade operating frequencies and system design flexibility against time-to-market requirements from their customers and overall lower system cost. Manufacturers have to balance the need to minimize the chip size against additional functionality.<br /><br />Microcontroller architectures vary widely. Some designs include general-purpose microprocessor cores, with one or more ROM, RAM, or I/O functions integrated onto the package. Other designs are purpose-built for control applications. 
A microcontroller instruction set usually has many instructions intended for bit-wise operations to make control programs more compact. For example, a general-purpose processor might require several instructions to test a bit in a register and branch if the bit is set, where a microcontroller could have a single instruction to provide that commonly required function.<br /><br />Microcontrollers typically do not have a math coprocessor, so fixed-point or floating-point arithmetic is performed by program code. Microcontrollers were originally programmed only in assembly language, but various high-level programming languages are now also in common use to target microcontrollers. These languages are either designed specially for the purpose or are versions of general-purpose languages such as the C programming language. Compilers for general-purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid in developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.<br /><br />Many microcontrollers are so quirky that they effectively require their own non-standard dialects of C, such as SDCC for the 8051, which prevent using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters are often used to hide such low-level quirks.<br /><br />Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early Intel 8052; BASIC and FORTH on the Zilog Z8, as well as on some modern devices. Typically these interpreters support interactive programming.<br /><br />Simulators are available for some microcontrollers, such as in Microchip's MPLAB environment. 
These allow a developer to analyze what the behavior of the microcontroller and their program would be when run on the actual part. A simulator will show the internal processor state and also that of the outputs, as well as allowing input signals to be generated. While most simulators are limited by being unable to simulate much other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems. Recent microcontrollers are often integrated with on-chip debug circuitry that, when accessed by an in-circuit emulator via JTAG, allows debugging of the firmware with a debugger.<br /><br />(source: wikipedia)</div>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-2594777243328759272009-04-29T23:17:00.000-07:002009-04-29T23:25:20.727-07:00Microcontrolleralso called a microcontroller unit, MCU or µC, is a small computer on a single integrated circuit consisting of a relatively simple CPU combined with support functions such as a crystal oscillator, timers, a watchdog, and serial and analog I/O. Program memory in the form of NOR flash or OTP ROM is also often included on chip, as well as a typically small read/write memory. Microcontrollers are designed for small applications. Thus, in contrast to the microprocessors used in personal computers and other high-performance applications, simplicity is emphasized. Some microcontrollers may operate at clock frequencies as low as 32 kHz, as this is adequate for many typical applications, enabling low power consumption (milliwatts or microwatts). 
They will generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long-lasting battery applications. <p class="MsoNoSpacing" style="text-align: justify;">Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, remote controls, office machines, appliances, power tools, and toys. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control even more devices and processes. The majority of computer systems in use today are embedded in other machinery, such as automobiles, telephones, appliances, and peripherals for computer systems. These are called embedded systems. While some embedded systems are very sophisticated, many have minimal requirements for memory and program length, with no operating system, and low software complexity. Typical input and output devices include switches, relays, solenoids, LEDs, small or custom LCD displays, radio frequency devices, and sensors for data such as temperature, humidity, and light level. Embedded systems usually have no keyboard, screen, disks, printers, or other recognizable I/O devices of a personal computer, and may lack human interaction devices of any kind. It is mandatory that microcontrollers provide real-time response to events in the embedded system they are controlling. When certain events occur, an interrupt system can signal the processor to suspend processing the current instruction sequence and to begin an interrupt service routine (ISR). The ISR will perform any processing required based on the source of the interrupt before returning to the original instruction sequence. 
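The interrupt flow just described (suspend the current sequence, run the ISR that matches the interrupt source, then resume) can be modelled as a small vector table. A software sketch only; real hardware does this dispatch in silicon:

```python
# Map interrupt sources to their service routines (a tiny vector table).
log = []

vector_table = {
    "timer_overflow": lambda: log.append("reschedule tasks"),
    "adc_complete":   lambda: log.append("read conversion result"),
    "uart_rx":        lambda: log.append("buffer received byte"),
}

def handle_interrupt(source: str):
    """Suspend normal flow, run the ISR for this source, then return."""
    isr = vector_table[source]
    isr()

for source in ("timer_overflow", "uart_rx"):
    handle_interrupt(source)
print(log)   # ['reschedule tasks', 'buffer received byte']
```

The interrupt source names here are illustrative; each real microcontroller defines its own fixed set of vectors.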
Possible interrupt sources are device dependent, and often include events such as an internal timer overflow, completion of an analog to digital conversion, a logic level change on an input such as from a button being pressed, and data received on a communication link. Where power consumption is important, as in battery-operated devices, interrupts may also wake a microcontroller from a low-power sleep state where the processor is halted until required to do something by a peripheral event.</p> <p class="MsoNoSpacing" style="text-align: justify;">Microcontroller programs must fit in the available on-chip program memory, since it would be costly to provide a system with external, expandable memory. Compilers and assemblers are used to turn high-level language programs into compact machine code for storage in the microcontroller's memory. Depending on the device, the program memory may be permanent, read-only memory that can only be programmed at the factory, or program memory may be field-alterable flash or erasable read-only memory.</p> <p class="MsoNoSpacing" style="text-align: justify;">(source: wikipedia)</p>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-51767018741905487082009-04-22T05:48:00.000-07:002009-04-22T06:20:39.764-07:00User View of Operating SystemsUser Interface: Help the user use the computer system productively, Provide consistent user interface services to application programs to lower learning curves and increase productivity, Choice of user interface depends on the kind of user.<br /><br />User Functions: Program execution, File commands, Mount and unmount devices, Printer spooling, Security, Inter-user communication, System Status, Program Services.<br /><br />Interface Design: CLI - Command Line Interface, Batch System Commands, Menu-Driven Interfaces, GUI - Graphical User Interface.<br /><br />Command Languages: Provide a
mechanism to combine sequences of commands. These pseudo-programs are known as scripts or batch files. Startup files: OS configuration, user preferences.<br />Features of Command Languages: Can accept input from the user and can output messages to I/O devices, Provide the ability to create and manipulate variables, Include the ability to branch and loop, Ability to specify arguments to the program command and to transfer those arguments to variables within the program, Provide error detection and recovery.<br /><br />Menu-Driven Interface: No need to memorize commands, All available commands are listed, Menus can be nested, Low data requirements.<br /><br />Docucentric Interface: Focus on the document rather than the application being executed, Expand the role of the OS by moving capabilities from the application to system services. Example: click on a document to run a program, Effort to assure that every application program responds in similar ways to user actions.<br /><br />(source: John Wiley & Sons - Wilson Wong, Linda Senne, Bentley College)yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-37526150639367544682009-04-12T19:42:00.000-07:002009-04-12T19:46:25.900-07:00WANs (Wide Area Networks)<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxxYN9rtn7d7pXCUO8SJKPjUixdHQff9uH8B5mUWkH085rA7HR7BYcjVdh2YSu9ccEXySfa0xWy3Gh0yUcDDpIMx41DputB_6noAp3Qo9zKUTSqLKIoO4M_glCSI-nnGx_X8U43Hv6Hb0s/s1600-h/mesh-net.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 370px; height: 202px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxxYN9rtn7d7pXCUO8SJKPjUixdHQff9uH8B5mUWkH085rA7HR7BYcjVdh2YSu9ccEXySfa0xWy3Gh0yUcDDpIMx41DputB_6noAp3Qo9zKUTSqLKIoO4M_glCSI-nnGx_X8U43Hv6Hb0s/s400/mesh-net.gif" alt="" id="BLOGGER_PHOTO_ID_5324001847750004226" border="0" 
/></a><br />The term Wide Area Network (WAN) usually refers to a network which covers a large geographical area and uses communications circuits to connect the intermediate nodes. A major factor impacting WAN design and performance is the requirement that they lease communications circuits from telephone companies or other communications carriers. Transmission rates are typically 2 Mbps, 34 Mbps, 45 Mbps, 155 Mbps, 625 Mbps (or sometimes considerably more).<br /><br />Numerous WANs have been constructed, including public packet networks, large corporate networks, military networks, banking networks, stock brokerage networks, and airline reservation networks. Some WANs are very extensive, spanning the globe, but most do not provide true global coverage. Organisations supporting WANs using the Internet Protocol are known as Network Service Providers (NSPs). These form the core of the Internet.<br /><br />By connecting the NSP WANs together using links at Internet Packet Interchanges (sometimes called "peering points"), a global communication infrastructure is formed. NSPs do not generally handle individual customer accounts (except for the major corporate customers), but instead deal with intermediate organisations which they can charge for high capacity communications. They generally have an agreement to exchange certain volumes of data at a certain "quality of service" with other NSPs. So practically any NSP can reach any other NSP, but may require the use of one or more other NSP networks to reach the required destination. NSPs vary in terms of the transit delay, transmission rate, and connectivity offered.<br /><br />A typical network is shown in the figure above. This connects a number of End Systems (ES) (e.g. A, C, H, K) and a number of Intermediate Systems (IS) (e.g. 
B, D, E, F, G, I, J) to form a network over which data may be communicated between the End Systems (ES).<br /><br />The characteristics of the transmission facilities lead to an emphasis on efficiency of communications techniques in the design of WANs. Controlling the volume of traffic and avoiding excessive delays is important. Since the topologies of WANs are likely to be more complex than those of LANs, routing algorithms also receive more emphasis. Many WANs also implement sophisticated monitoring procedures to account for which users consume the network resources. This is, in some cases, used to generate billing information to charge individual users.<br /><br />(source: erg.abdn.ac.uk)yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-68445642012725373742009-03-16T01:28:00.000-07:002009-03-16T01:41:21.798-07:00Secure BGP Project<p class="MsoNormal" style="text-align: justify;"><span style="font-size:100%;">Internet routing is based on a distributed system composed of many routers, grouped into management domains called Autonomous Systems (ASes). Routing information is exchanged between ASes in Border Gateway Protocol (BGP) UPDATE messages. BGP is a critical component of the Internet's routing infrastructure. However, it is highly vulnerable to a variety of attacks due to the lack of a scalable means of verifying the authenticity and authorization of BGP control traffic. Secure BGP (S-BGP) addresses these vulnerabilities. The S-BGP architecture employs three security mechanisms. First, a Public Key Infrastructure (PKI) is used to support the authentication of ownership of IP address blocks, ownership of Autonomous System (AS) numbers, an AS's identity, and a BGP router's identity and its authorization to represent an AS. This PKI parallels the IP address and AS number assignment system and takes advantage of the existing infrastructure (Internet registries, etc.) 
Second, a new, optional, BGP transitive path attribute is employed to carry digital signatures (in "attestations") covering the routing information in a BGP UPDATE. These signatures, along with certificates from the S-BGP PKI, enable the receiver of a BGP routing UPDATE to verify the address prefixes and path information that it contains. Third, IPsec is used to provide data and partial sequence integrity, and to enable BGP routers to authenticate each other for exchanges of BGP control traffic. Under a previous contract with DARPA, a proof-of-concept prototype of S-BGP was developed and used to demonstrate the effectiveness and feasibility of deploying S-BGP. However, a major obstacle to the deployment of S-BGP is that it requires the participation of several distinct organizations -- the Internet registries, router vendors, and Internet service providers (ISPs). Because there will be no security benefits unless at least a few organizations of each type participate, no single organization can justify the expense of investing in this new technology unless the others have also done so -- a classic chicken-and-egg problem. The goal of this project is to overcome these obstacles and promote deployment of S-BGP into the Internet. Deploying S-BGP will require working with the Internet registries and ISPs to set up the PKI; working with router vendors to implement the S-BGP enhancements (new path attribute, IPsec, etc.) 
on COTS routers; and convincing ISPs to buy and use these routers.</span></p><p class="MsoNormal" style="text-align: justify;">(source: www.ir.bbn.com)</p>yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-64716757907577335242009-03-11T18:18:00.000-07:002009-03-11T18:26:22.734-07:00Software EngineeringSoftware Engineering is an approach to developing software that attempts to treat it as a formal process more like traditional engineering than the craft that many programmers believe it is. We talk of crafting an application, refining and polishing it, as if it were a wooden sculpture, not a series of logic instructions. The problem here is that you cannot engineer art. Programming falls somewhere between an art and a science.<br />Programming - Art or Engineering?<br />There has always been considerable debate about the nature of programming. If bridges were designed like software then there would be a lot of ferries operating. You can't have a second go if a bridge fails. That's the argument that the Software Engineering proponents put forward.<br /><br />How do I Stop my Software Killing Someone?<br />Manufacturers cannot build complex life-critical systems like aircraft, nuclear reactor controls, or medical systems and expect the software to be thrown together. They require the whole process to be thoroughly managed, so that budgets can be estimated, staff recruited, and the risk of failure or expensive mistakes minimized.<br /><br />In safety-critical areas such as aviation, space, nuclear power plants, medicine, fire detection systems, and roller coaster rides, the cost of failure can be enormous as lives are at risk. A divide-by-zero error that brings down an aircraft is just not acceptable.<br />So It Never Goes Wrong?<br />In spite of this there have been a few high-profile disasters. 
Ariane 5, a rocket system for delivering satellites into orbit, blew up in June 1996, 40 seconds after takeoff, due to an arithmetic overflow bug. The system had used specifications from an earlier rocket, Ariane 4, without having been fully tested.<br /><br />What Is Computer Aided Software Engineering?<br />The whole design process has to be formally managed long before the first line of code is written. Enormous design documents, hundreds or thousands of pages long, are produced using C.A.S.E. (Computer Aided Software Engineering) tools and then converted into Design Specification documents, which are used to design code.<br /><br />C.A.S.E. suffers from the "not quite there yet" syndrome. There are no systems that can take a set of design constraints and requirements and then generate code that satisfies all the requirements and constraints. It's far too complex a process. So the available C.A.S.E. systems manage parts of the lifecycle process, but not all of it.<br />So Is It Paperwork?<br />One distinguishing feature of Software Engineering is the paper trail that it produces. Designs have to be signed off by Managers and Technical Authorities all the way from top to bottom, and the role of Quality Assurance is to check the paper trail. Many Software Engineers would admit that their job is around 70% paperwork and 30% code. It's a costly way to write software, and this is why avionics in modern aircraft are so expensive.<br /><br />(source: cplus.about.com)yosuke kurenaihttp://www.blogger.com/profile/10895181225297859834noreply@blogger.com0tag:blogger.com,1999:blog-8437288026057109147.post-9690270423481761142009-03-04T18:10:00.000-08:002009-03-04T18:14:44.256-08:00Knowledge ManagementKnowledge Management (KM) comprises a range of practices used in an organisation to identify, create, represent, distribute and enable adoption of insights and experiences. 
Such insights and experiences comprise knowledge, either embodied in individuals or embedded in organisational processes or practice. An established discipline since 1995, KM includes courses taught in the fields of business administration, information systems, management, and library and information sciences (Alavi & Leidner 1999). More recently, other fields, including those focused on information and media, computer science, public health, and public policy, have also started contributing to KM research. Many large companies and non-profit organisations have resources dedicated to internal KM efforts, often as a part of their 'Business Strategy', 'Information Technology', or 'Human Resource Management' departments (Addicott, McGivern & Ferlie 2006). Several consulting companies also exist that provide strategy and advice regarding KM to these organisations.<br /><br />KM efforts typically focus on organisational objectives such as improved performance, competitive advantage, innovation, the sharing of lessons learned, and continuous improvement of the organisation. KM efforts overlap with Organisational Learning, and may be distinguished from it by a greater focus on the management of knowledge as a strategic asset and on encouraging the exchange of knowledge. KM efforts can help individuals and groups to share valuable organisational insights, to reduce redundant work, to avoid reinventing the wheel, to reduce training time for new employees, to retain intellectual capital as employees turn over in an organisation, and to adapt to changing environments and markets (McAdam & McCreedy 2000)(Thompson & Walsham 2004).<br /><br />KM efforts have a long history, including on-the-job discussions, formal apprenticeship, discussion forums, corporate libraries, professional training and mentoring programs. 
More recently, with the increased use of computers in the second half of the 20th century, specific adaptations of technologies such as knowledge bases, expert systems, knowledge repositories, group decision support systems, and computer-supported cooperative work have been introduced to further enhance such efforts.<br /><br />In 1999, the term personal knowledge management was introduced; it refers to the management of knowledge at the individual level (Wright 2005). Different frameworks for distinguishing between kinds of knowledge exist. One proposed framework for categorising the dimensions of knowledge distinguishes between tacit knowledge and explicit knowledge. Tacit knowledge represents internalised knowledge of which an individual may not be consciously aware, such as the way he or she accomplishes particular tasks. At the opposite end of the spectrum, explicit knowledge represents knowledge that the individual holds consciously in mental focus, in a form that can easily be communicated to others (Alavi & Leidner 2001).<br /><br />Early research suggested that a successful KM effort needs to convert internalised tacit knowledge into explicit knowledge in order to share it, but the same effort must also permit individuals to internalise and make personally meaningful any codified knowledge retrieved from the KM effort. Subsequent research into KM suggested that a distinction between tacit knowledge and explicit knowledge represented an oversimplification and that the notion of explicit knowledge is self-contradictory. 
Specifically, for knowledge to be made explicit, it must be translated into information (i.e., symbols outside of our heads) (Serenko & Bontis 2004).<br /><br />A second proposed framework for categorising the dimensions of knowledge distinguishes between embedded knowledge of a system outside of a human individual (e.g., an information system may have knowledge embedded into its design) and embodied knowledge representing a learned capability of a human body’s nervous and endocrine systems.<br /><br />A third proposed framework for categorising the dimensions of knowledge distinguishes between the exploratory creation of "new knowledge" (i.e., innovation) and the transfer or exploitation of "established knowledge" within a group, organisation, or community. Collaborative environments such as communities of practice or the use of social computing tools can be used for both knowledge creation and transfer.<br /><br />Knowledge may be accessed at three stages: before, during, or after KM-related activities. Different organisations have tried various knowledge capture incentives, including making content submission mandatory and incorporating rewards into performance measurement plans. Considerable controversy exists over whether or not such incentives work, and no consensus has emerged.<br /><br />One strategy for KM involves actively managing knowledge. In such an instance, individuals strive to explicitly encode their knowledge into a shared knowledge repository, such as a database, as well as retrieving knowledge they need that other individuals have provided to the repository. Another strategy for KM involves individuals making knowledge requests of experts associated with a particular subject on an ad hoc basis. 
In such an instance, the expert individual(s) can provide their insights to the particular person or people needing them (Snowden 2002).<br /><br />(source: en.wikipedia.org)<br /><br />Software Design<br /><br />Designing software is an exercise in managing complexity. The complexity exists within the software design itself, within the software organization of the company, and within the industry as a whole. Software design is very similar to systems design. It can span multiple technologies and often involves multiple sub-disciplines. Software specifications tend to be fluid, changing rapidly and often, usually while the design process is still going on. Software development teams also tend to be fluid, likewise often changing in the middle of the design process. In many ways, software bears more resemblance to complex social or organic systems than to hardware. All of this makes software design a difficult and error-prone process. None of this is original thinking, but almost 30 years after the software engineering revolution began, software development is still seen as an undisciplined art compared to other engineering professions.<br /><br />The general consensus is that when real engineers get through with a design, no matter how complex, they are pretty sure it will work. They are also pretty sure it can be built using accepted construction techniques. In order for this to happen, hardware engineers spend a considerable amount of time validating and refining their designs. Consider a bridge design, for example. Before such a design is actually built, the engineers do structural analysis; they build computer models and run simulations; they build scale models and test them in wind tunnels or in other ways.
In short, the designers do everything they can think of to make sure the design is good before it is built. The design of a new airliner is even more demanding; for those, full-scale prototypes must be built and test flown to validate the design predictions.<br /><br />It seems obvious to most people that software designs do not go through the same rigorous engineering as hardware designs. However, if we consider source code as design, we see that software designers actually do a considerable amount of validating and refining of their designs. Software designers do not call it engineering, however; they call it testing and debugging. Most people do not consider testing and debugging as real "engineering"; certainly not in the software business. The reason has more to do with the refusal of the software industry to accept code as design than with any real engineering difference. Mock-ups, prototypes, and bread-boards are an accepted part of other engineering disciplines. Software designers do not have or use more formal methods of validating their designs because of the simple economics of the software build cycle: it is cheaper and simpler to just build the design and test it than to do anything else. We do not care how many builds we do -- they cost next to nothing in terms of time, and the resources used can be completely reclaimed later if we discard the build. Note that testing is not just concerned with getting the current design correct; it is part of the process of refining the design. Hardware engineers of complex systems often build models (or at least they visually render their designs using computer graphics). This allows them to get a "feel" for the design that is not possible by just reviewing the design itself. Building such a model is both impossible and unnecessary with a software design. We just build the product itself. Even if formal software proofs were as automatic as a compiler, we would still do build/test cycles.
Ergo, formal proofs have never been of much practical interest to the software industry.<br /><br />This is the reality of the software development process today. Ever more complex software designs are being created by an ever-increasing number of people and organizations. These designs will be coded in some programming language and then validated and refined via the build/test cycle. The process is error-prone and not particularly rigorous to begin with. The fact that a great many software developers do not want to believe that this is the way it works compounds the problem enormously.<br /><br />Most current software development processes try to segregate the different phases of software design into separate pigeon-holes: the top-level design must be completed and frozen before any code is written; testing and debugging are necessary just to weed out the construction mistakes. In between are the programmers, the construction workers of the software industry. Many believe that if we could just get programmers to quit "hacking" and "build" the designs given to them (and, in the process, make fewer errors), then software development might mature into a true engineering discipline. That is not likely to happen as long as the process ignores the engineering and economic realities.<br /><br /><span style="font-style: italic;">(source: bleading-edge.com)</span>
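The essay's central claim, that the build/test cycle is how software designs are validated and refined, can be shown in miniature. The sketch below is hypothetical: the `median` function plays the role of the "design", and the assertions play the role of the cheap validation each build provides.

```python
def median(values):
    """Candidate 'design': the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]                      # odd count: middle element
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: mean of middle pair

# Each run of these checks is a cheap 'build' whose tests validate,
# and when they fail, refine, the design expressed in the code above.
assert median([3, 1, 2]) == 2        # odd count
assert median([4, 1, 3, 2]) == 2.5   # even count
assert median([7]) == 7              # degenerate case
print("design validated by this build")
```

Running the checks costs next to nothing, which is exactly the economic point the essay makes: rebuilding and retesting is cheaper than any separate formal validation step would be.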