When it is highly probable that some accounts will prove uncollectible and the dollar amount can be reasonably estimated, estimates of bad debt expense should be made and recorded in the period in which the sale takes place. Two methods of accounting for uncollectible accounts are used in practice: the allowance method and the direct write-off method.
When the seller can make a reasonable estimate of the dollar amount to be written off, the allowance method should be used. The allowance method provides an expense for uncollectible receivables in advance of their write-off. The use of the allowance method serves two purposes. First, it reduces the value of the receivables to the amount of cash expected to be realized in the future. Second, it matches the uncollectible expense of the current period with the related revenues of the period.
The allowance for uncollectible accounts is reported on the balance sheet as a deduction from accounts receivable and is called a contra asset account. Because the receivables are reported net of the allowance, the net receivables balance is the amount of cash that is expected to be collected in the near future and thus satisfies the financial reporting objective of providing information about future cash inflows to the company.
The estimate of uncollectibles at the end of a fiscal period should be based on past experience and forecasts of future business activity. When the general economic environment is favorable, the amount of the expense should normally be less than when the trend is in the opposite direction.
Listed below are the three generally accepted procedures that may be used in applying the allowance method.
1. Percentage of Credit Sales: This estimate of uncollectible accounts is based on a historically determined percentage of each period's credit sales. For example, if your company's experience indicates that ultimate uncollectible accounts average about two percent of credit sales, an adjusting entry would be made at year-end that expensed two percent of credit sales, with an offsetting credit to the allowance for uncollectible accounts.
2. Percentage of Ending Accounts Receivable: Under this method the percentage of the ending balance of accounts receivable not expected to be collected is determined. The allowance account is then adjusted to equal this percentage of the ending receivables balance. The method emphasizes valuation of the receivables at net realizable value on the balance sheet.
3. Aging of Accounts Receivable: This method is a more precise variation of Percentage of Ending Accounts Receivable. Aging recognizes that the longer a receivable is outstanding, the less likely it is to be collected. A separate estimate of the percentage of uncollectibles is applied to each age classification group instead of applying an overall percentage to the total.
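As a rough numerical sketch of the aging procedure, a loss percentage can be applied to each age group and the allowance account adjusted to the resulting total. The age buckets, percentages and balances below are invented for illustration; they are not figures from this article.

```python
# Aging of accounts receivable: (age group, balance, estimated % uncollectible).
# All figures are hypothetical.
aging = [
    ("0-30 days",    50_000, 0.01),
    ("31-60 days",   20_000, 0.04),
    ("61-90 days",   10_000, 0.10),
    ("over 90 days",  5_000, 0.30),
]

# Required ending balance of the allowance account.
required_allowance = sum(balance * pct for _, balance, pct in aging)

# The adjusting entry brings the existing allowance up to the required level.
existing_allowance = 1_200  # assumed credit balance carried forward
bad_debt_expense = required_allowance - existing_allowance

print(f"Required allowance: {required_allowance:,.2f}")  # Required allowance: 3,800.00
print(f"Bad debt expense:   {bad_debt_expense:,.2f}")    # Bad debt expense:   2,600.00
```

Under the percentage-of-credit-sales method, by contrast, the expense would simply be the chosen percentage of the period's credit sales, with no adjustment toward a required ending balance.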
The allowance method emphasizes reporting uncollectible accounts expense in the period in which the sales occur. This emphasis on matching expenses with related revenue is the preferred method of accounting for uncollectible receivables.
In situations in which it is impossible to estimate, with reasonable accuracy, the uncollectibles at the end of the period, the direct write-off method should be used. Under the direct write-off method, no entries are made until a customer actually defaults on payment, at which time the uncollectible account receivable is written off; therefore, no allowance account is required.
The reserve for uncollectibles is a valuation reserve, since it in effect writes down accounts receivable to a probable liquidation value. Whether the reserve is adequate is a matter of judgment. Often, conservative management prefers to set the allowance for uncollectibles on the high side, so that any portion of the allowance not needed at the end of the accounting period can be released into profit after a reasonable balance has been retained for the new accounting period. This approach has distinct advantages if a serious number of insolvencies is encountered. On the other hand, it is sometimes criticized by auditors for being unrealistic and for understating profits during the business year as the large allowance is accumulated.
Copyright 1999 Credit Research Foundation
| <urn:uuid:3ad459db-8aad-45f2-9de8-d67afad8968f> {
"dump": "CC-MAIN-2016-40",
"url": "http://www.crfonline.org/orc/ca/ca-13.html",
"date": "2016-09-26T22:24:32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9202336668968201,
"token_count": 959,
"score": 2.859375,
"int_score": 3
} |
Walter Mondale faces the same problem that confronted many recent Democratic presidential candidates: To win they have had to demonstrate not only that they were willing to negotiate with the Soviet Union, but also that they would stand firm against the Soviets if the need arose.
Polls have consistently shown over recent years that most Americans want to see the United States negotiate on arms control and other issues with the Soviets. But the same polls have shown that most Americans don't trust the Soviets. Most want a strong defense.
Mr. Mondale has stressed that he, too, wants an effective defense. He acknowledges that the Soviets are ''tough adversaries'' and asserts that no agreement with them can ever be based on trust. Agreements must be mutual and verifiable.
But in his campaign, Mondale has placed the main emphasis on the need to talk with the Soviets and on his contention that President Reagan has failed to negotiate seriously. (Mr. Reagan has argued that he is ready to talk at any time. He also reminds his audiences that it is the Soviet Union which withdrew its negotiators from the Geneva arms control talks).
Mondale has committed himself to a US-Soviet summit within six months of taking office and to annual meetings thereafter. He says he is convinced that if US-Soviet talks are not moving forward, US-Soviet relations don't merely stand still; like a bicycle, they fall down. Reagan says past arms agreements with the Soviets have been flawed. Mondale believes that they have been vital and that more are needed.
In two other key areas, Mondale clashes sharply with the President:
Southern Africa. Mondale has criticized Reagan for ''cozying up'' to South Africa's apartheid regime. Reagan would argue that a friendlier attitude toward South Africa has given the US more influence over events.
Central America. Mondale stresses a need for reform and negotiation in Central America rather than the use of military force. Where the Democrat differs most sharply with Reagan is over aid to the contra rebels - Reagan calls them freedom fighters - and what Mondale describes as an illegal war in Nicaragua. Mondale would cut the aid to the rebels.
The Democratic contender also says that he would sharply reduce the US military presence in Central America. But Reagan has already cut the size of military maneuvers in Honduras. The number of US military advisers in El Salvador is relatively small. On this point, as on a number of others, once Mondale took office, his differences with Reagan might turn out to be narrower than now seems to be the case.

Regarding the Middle East, Mondale claims that Reagan's plan of Sept. 1, 1982, ''torpedoed'' the Camp David process. But that process seemed to be going nowhere. Reagan's plan actually brought the President more closely into line with the Carter-Mondale approach to resolving the Arab-Israeli conflict. Mondale says he would move the US Embassy to Jerusalem, but that might prove difficult once he was in office and had to take Arab concerns into account.
Mondale would cut the sale of sophisticated weapons to the Arabs. But once in office, President Carter changed his mind on that issue. So did Reagan. | <urn:uuid:0d07ad37-4835-4e01-94f4-bab14d950ac6> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.csmonitor.com/1984/1015/101519.html",
"date": "2016-09-27T00:08:03",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9773135185241699,
"token_count": 645,
"score": 2.515625,
"int_score": 3
} |
Two years after the National School Lunch Program made big strides toward providing students with more nutritious food, Northwest Suburban High School District 214 is considering leaving it behind. Other districts in the suburbs already have dropped out, too, as lunch sales have declined.
And the latest expansion of the lunch program by the Obama administration to include snacks and fundraisers has Superintendent Dave Schuler "a little cranked up," as reported by Melissa Silverberg. Schuler believes the district can do better striking out on its own.
Such a strong response may seem reason enough to pick apart one of the centerpieces of the first lady's anti-obesity campaign. However, that is not our intent, as the program may work well for other districts, particularly those with significant low-income populations. Instead, we point to the new rules as an opportunity for suburban school districts to see through a broader lens in steering children toward healthy living.
While the focus of school nutrition has long centered on lunch, there are many other areas for officials to consider for their food guidelines -- policies that sometimes are ill-defined, incomplete or even nonexistent.
It takes only one outspoken parent to point out the potential confusion and inconsistencies that can come without a clear policy for food in schools. Earlier this month a West Dundee mother of a first-grader in Community Unit District 300 complained that her child's teacher used candy as an incentive for good behavior, saying it promotes poor eating habits. Our story on her complaint caused a flurry of discussion on social media.
The incident also demonstrated that lunch is only one component of a school or district nutrition plan. Beyond guidelines for meals, school policies need to determine what can be sold at fundraisers, when and what kinds of in-class snacks are allowed, what is offered at school stores and vending machines, and how the rules will be enforced. They should define what constitutes "healthy" or "nutritious" foods. The policies also could outline the types of advertising and sponsorships allowed on school grounds.
In addition, children should be taught about nutritious food choices across many subject areas, not just in health class.
School nutrition is a topic that has simmered for years, but the child obesity problem has created a sense of urgency. We support initiatives that help keep students from consuming junk food and sugary drinks at school. Still, children cannot be expected to change their eating habits overnight. Moderate adaptations to food policies over time, with support from parents at home, are key to encouraging healthful choices.
District 214 officials are creating a program they think will give students both what they want for lunch and what their bodies need. As other districts review their own lunch programs, a look at the larger school nutrition picture would serve them and their students well. | <urn:uuid:1b246a79-3e2b-4097-9d6f-6efeaaf4d7ad> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.dailyherald.com/article/20140424/discuss/140429080/",
"date": "2016-09-26T22:44:27",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9695261716842651,
"token_count": 560,
"score": 2.53125,
"int_score": 3
} |
NASA Measures Air Pollution Above Bay Area to Determine Effect on Humans
February 4, 2013 6:25 AM
NASA plans to measure hydrocarbons and nitrous oxide by flying as low as 1,000 feet and as high as 26,000 feet
NASA will measure air pollution levels at different heights above the Bay Area in an effort to detect whether pollutants are close to the Earth's surface -- where humans breathe.
Satellites are typically used to measure air quality, but a major issue with this method is that they cannot tell whether pollution is close to the ground where people breathe or if it's higher in the atmosphere.
NASA is working to change that now with its DISCOVER-AQ campaign, which stands for Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality. The program was launched by NASA in 2011.
NASA, which partnered with the Bay Area Air Quality Management District for this particular flight, will use planes to fly at various altitudes over the Bay Area to predict air pollution levels with respect to wind patterns, time of day, etc. It plans to measure hydrocarbons and nitrous oxide by flying as low as 1,000 feet and as high as 26,000 feet.
The Bay Area Air Quality Management District paid $28,000 for the Bay Area flight.
Science World Report
Copyright 2016 DailyTech LLC.
| <urn:uuid:0b6f02c4-585f-410e-a724-2d135127d232> {
"dump": "CC-MAIN-2016-40",
"url": "http://www.dailytech.com/article.aspx?newsid=29815&commentid=837007&threshhold=1&red=320",
"date": "2016-09-26T23:24:45",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8510638475418091,
"token_count": 795,
"score": 2.953125,
"int_score": 3
} |
A Graphtec GL820 data logger user could not understand why the resolution of the instrument was not better than 60 RPM. He connected the output of a reflective tape sensor to a discrete input programmed for REVOL, which applies an automatic scaling operation of the following form, counting pulses over a one-second sample period:

RPM = (pulses per second × 60) / (pulses per revolution)

Clearly from the above equation, RPM resolution can be no better than 60 RPM when only one pulse per revolution is present, which it was in his case. Regardless of how fast the shaft turns, resolution remains the same: 60 RPM. There are two ways around this problem.
The first and best solution is to add more pulses per revolution. If we double them to two, resolution improves to 30 RPM. Configure 60 pulses per revolution and resolution improves to 1 RPM. Of course, in some situations you may not have the luxury of adding pulses per revolution. Maybe the shaft diameter is too small, or maybe you're simply out of reflective tape. In that case, your only other option is to increase the sampling period.
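The resolution relationship can be sketched in a few lines of Python, assuming (as above) that whole pulses are counted over the sampling period and scaled by 60:

```python
def rpm_resolution(pulses_per_rev: int, sample_period_s: float = 1.0) -> float:
    """Smallest RPM step resolvable when counting whole pulses over one
    sampling period, given RPM = counts * 60 / (pulses_per_rev * period)."""
    return 60.0 / (pulses_per_rev * sample_period_s)

for ppr in (1, 2, 60):
    print(ppr, rpm_resolution(ppr))  # 1 60.0, 2 30.0, 60 1.0
```

Note that the two remedies are interchangeable in this formula: one pulse per revolution counted over 15 seconds yields the same 4 RPM resolution as 15 pulses per revolution counted over one second.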
Another way to determine RPM is to count pulses over an extended sampling interval (several seconds or more), divide the count by the interval and multiply by 60. This approach is perhaps the only solution when very low RPM values with a single pulse per revolution are encountered. For example, assume a shaft turning at 113 RPM produces one pulse per revolution. Using the preferred one-second approach above, resolution is 60 RPM against a 113 RPM reading, or a horrible 53 percent of the measured value, and not at all useful. However, if we count revolutions over a 15-second interval we'd accumulate a total of 28 pulses. Dividing by the sampling interval (15) and multiplying by 60 yields 112, very close to the actual RPM value. The penalty of this approach, in this example, is that RPM is updated only once every 15 seconds. That's the price you pay for accuracy with only one pulse per revolution.
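A minimal sketch of the extended-interval calculation, reproducing the 113 RPM example above:

```python
import math

def rpm_from_count(pulse_count: int, interval_s: float, pulses_per_rev: int = 1) -> float:
    """Estimate RPM from pulses counted over an extended sampling interval."""
    return pulse_count * 60.0 / (pulses_per_rev * interval_s)

# A shaft at 113 RPM with one pulse per revolution completes
# 113/60 * 15 = 28.25 revolutions in 15 seconds, so 28 whole pulses are counted.
count = math.floor(113 / 60 * 15)        # 28
print(count, rpm_from_count(count, 15))  # 28 112.0
```

The truncation to whole pulses is what limits accuracy: the longer the interval, the smaller the error contributed by the fractional pulse that goes uncounted.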
"dump": "CC-MAIN-2016-40",
"url": "http://www.dataq.com/blog/data-acquisition/how-to-maximize-rpm-measurement-resolution/",
"date": "2016-09-26T22:27:06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.911833643913269,
"token_count": 383,
"score": 2.6875,
"int_score": 3
} |
Definitions for lambda /ˈlæm də/
This page provides all possible meanings and translations of the word lambda
the 11th letter of the Greek alphabet
the craniometric point at the junction of the sagittal and lamboid sutures of the skull
(Phys.) A subatomic particle carrying no charge, having a mass equal to 2183 times that of an electron; it decays rapidly, typically forming a nucleon and a pion. MW10
Origin: [NL., fr. Gr. la`mbda.]
The eleventh letter of the Classical and Modern Greek alphabet, the twelfth of the Old Greek alphabet.
Unit representation of wavelength.
A lambda expression.
The junction of the lambdoid and sagittal sutures of the cranium
A lambda baryon
Origin: from Greek λάμδα
the name of the Greek letter Λ, λ, corresponding with the English letter L, l
the point of junction of the sagittal and lambdoid sutures of the skull
Origin: [NL., fr. Gr. la`mbda.]
Lambda is the 11th letter of the Greek alphabet. In the system of Greek numerals lambda has a value of 30. Lambda is related to the Phoenician letter Lamed. Letters in other alphabets that stemmed from lambda include the Latin L and the Cyrillic letter El. The ancient grammarians and dramatists give evidence to the pronunciation as [l] in Classical Greek times. In Modern Greek the name of the letter, Λάμδα, is pronounced [ˈlamða]; the spoken letter itself has the sound of [l], as with Latinate "L". In early Greek alphabets, the shape and orientation of lambda varied. Most variants consisted of two straight strokes, one longer than the other, connected at their ends. The angle might be in the upper left, lower left, or top. Other variants had a vertical line with a horizontal or sloped stroke running to the right. With the general adoption of the Ionic alphabet, Greek settled on an angle at the top; the Romans, borrowing from Western alphabets, put the angle at the lower left. The HTML 4 character entity references for the Greek capital and small letter lambda are "&Lambda;" and "&lambda;" respectively. The Unicode code point for small lambda is U+03BB.
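The character and entity details in the paragraph above can be checked from Python's standard library (a quick sketch; nothing here is specific to this dictionary entry):

```python
import html
import unicodedata

cap, small = "\u039B", "\u03BB"  # code points U+039B and U+03BB

print(unicodedata.name(cap))    # GREEK CAPITAL LETTER LAMDA
print(unicodedata.name(small))  # GREEK SMALL LETTER LAMDA

# The HTML 4 named entity references resolve to these same characters.
assert html.unescape("&Lambda;") == cap
assert html.unescape("&lambda;") == small
```

Note that the official Unicode character names use the spelling "LAMDA".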
Chambers 20th Century Dictionary
lam′da, n. the Greek letter corresponding to Roman l.—n. Lamb′dacism, a too frequent use of words containing l: a defective pronunciation of r, making it like l.—adjs. Lamb′doid, -al, shaped like the Greek capital Λ—applied in anatomy to the suture between the occipital and the two parietal bones of the skull. [Gr.,—Heb. lamedh.]
The numerical value of lambda in Chaldean Numerology is: 6
The numerical value of lambda in Pythagorean Numerology is: 6
Sample Sentences & Example Usage
It really shows that the judge is making it up as he goes along and not following the rules of the law.
Niki and Amy and their daughters became Indiana's first family when they bravely joined Lambda Legal's marriage case, which meant openly sharing very personal and painful parts of their journey together as Niki battled cancer, they brought this case and fought so hard because they loved each other and wanted their daughters to be treated with respect, just like any other family in Indiana.
| <urn:uuid:941b84a3-aec0-43f6-b8b6-732d2072378f> {
"dump": "CC-MAIN-2016-40",
"url": "http://www.definitions.net/definition/lambda",
"date": "2016-09-26T22:37:51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8619722723960876,
"token_count": 782,
"score": 2.8125,
"int_score": 3
} |
Diabetes is a chronic (lifelong) disease marked by high levels of sugar in the blood.
There are three major types of diabetes:
Type 1 diabetes is usually diagnosed in childhood, but many patients are diagnosed when they are older than age 20. In this disease, the body makes little or no insulin. Daily injections of insulin are needed. The exact cause is unknown. Genetics, viruses, and autoimmune problems may play a role.
Type 2 diabetes is far more common than type 1. It makes up most of diabetes cases. It usually occurs in adulthood, but young people are increasingly being diagnosed with this disease. The pancreas does not make enough insulin to keep blood glucose levels normal, often because the body does not respond well to insulin.
Many people with type 2 diabetes do not know they have it, although it is a serious condition. Type 2 diabetes is becoming more common due to increasing obesity and failure to exercise.

Gestational diabetes is high blood glucose that develops at any time during pregnancy in a woman who does not have diabetes.
The immediate goals are to treat diabetic ketoacidosis and high blood glucose levels. Because type 1 diabetes can start suddenly and have severe symptoms, people who are newly diagnosed may need to go to the hospital.
The long-term goals of treatment are to:
Prevent diabetes-related complications such as blindness, heart disease, kidney failure, and amputation of limbs
These goals are accomplished through:
Careful self testing of blood glucose levels
Meal planning and weight control
Medication or insulin use
Complementary and Alternative Medicine (CAM)
Several CAM treatments have been used for diabetes, but the ones with the best evidence behind them are:
Traditional Chinese medicine (TCM)—this can include acupuncture, herbs, or bodywork to stimulate the body's energy ("chi") and lower blood glucose. Dozens of studies of TCM (mostly the use of herbs) showing benefit for diabetes have been published in China, but most Western docs aren't aware of them. You can read about some of these studies here.
Herbal medicines—in addition to Chinese herbal medicine, six or more Western and Ayurvedic (Indian) herbs have shown benefits in various studies. I'll post another blog entry about herbs later on.
Other CAM treatments may not lower blood glucose, but may help with symptoms and complications of diabetes:
Hyperbaric oxygen therapy (HBOT) helps wounds heal. HBOT has greatly reduced the rate of foot amputations in several studies, such as this one. Yet it's rarely used.
Aromatherapy (sometimes called essential oils therapy or flower essence therapy) can help reduce stress symptoms and improve sleep.
Chiropractic and massage therapies can help reduce pain and improve mobility. | <urn:uuid:86d75f2f-ef72-40a2-b99f-74de68e80c07> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.densu.com/diabetescure.html",
"date": "2016-09-26T22:23:52",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9417093396186829,
"token_count": 569,
"score": 3.203125,
"int_score": 3
} |
Dutch designer Maaike Roozenburg uses 3D printing to create replicas of antique objects enhanced with a layer of augmented reality information in her SmartReplicas project, reports Gabrielle Kennedy.
Roozenburg has combined 3D scanning and printing with ceramic making techniques to create copies of fragile tea sets and other museum pieces.
Augmented reality markers are added to the objects, which, when read by a device like a smartphone or tablet, present information about the original artefact.
The combination of technologies will allow museum visitors to both handle identical objects to those kept behind glass and learn more about their history.
"I think most people would agree that visiting a museum to look at teacups exhibited in glass boxes is boring," said Roozenburg. "Utensils were made to be used, to be presented with life, and in a museum format they lose their main purpose and obviously their soul."
The first trial of the technique took place after months of negotiation. Roozenburg initially approached Harold E. Henkes, an octogenarian Dutchman who owned the Netherlands’ largest collection of ancient glass objects, asking to copy his collection and enhance it with augmented reality technology – but everything he owned was with the Museum Boijmans van Beuningen.
Working with Jouke Verlinden, an assistant professor in computer aided engineering at TU Delft, Roozenburg came up with the idea to lay the objects on a bed – like a patient in a hospital – and send the bed into a tomography scanner. This technique was secure enough for the museum to agree to let her handle objects from its collection, with the first project focusing on seven antique cups.
"It took me six months to convince the museum that this would be safe," explained Roozenburg. "I was terrified. They were terrified. I took out insurance for the first time."
"It was very stressful when we were starting out, but now I drive around with the cups in my boot, and everybody is much more relaxed about the safety," Roozenburg said.
The scanner was used to create a 3D computer models of the cups from sectional images. This data was then used to 3D-print replicas and moulds for casting porcelain.
The results were exhibited in the Boijmans van Beuningen last summer, to mixed response.
"It really was a new idea so the hurdle was encouraging museum visitors to get out their electronic devices and engage," Roozenburg said. "The reaction underscored just how passive a visit to the museum has become. People were just not used to it."
Roozenburg is now collaborating with creative agency LikeFriends, which specialises in information architecture, to improve the virtual reality interface.
"They are the ones who can really make the augmented reality component work better. I think it became more and more important to get this project out of the cultural sector if it was ever going to be really produced," said Roozenburg.
LikeFriends are working on bringing together illustration, animation, photography, sound effects and video projection in 2D and 3D layers to create a story for each object.
"How these cups were produced, how wealthy people drank tea and how pieces of the set came to be a part of the collections of museums as far apart as the Boijmans in Rotterdam, the V&A in London and the Idemitsu museum in Tokyo will be revealed," said Roozenburg.
"Some people may only want to drink tea from the cups and others will want to access the full story. People can go as far as they want."
Some of the Netherlands' major institutions have already contacted Roozenburg to discuss how her ideas could work for them.
"Museums should be acting more like warehouses of objects and a point from which knowledge can spread," she said. "There are so many more ways to share information than just selling tickets to visitors."
Roozenburg is also looking for a manufacturing partner in China, where some of the originals she first scanned originated from.
| <urn:uuid:eab02cc3-f529-4289-914e-b457ef48570c> {
"dump": "CC-MAIN-2016-40",
"url": "http://www.dezeen.com/2014/07/06/maaike-roozenburg-smartreplicas-3d-scanning-printing-augmented-reality/",
"date": "2016-09-26T22:42:47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9763213396072388,
"token_count": 903,
"score": 2.609375,
"int_score": 3
} |
"group of organisms evolved from a common ancestor," 1957, from Greek klados "young branch, offshoot of a plant, shoot broken off," from PIE *kele-, possibly from root *kel- "to strike, cut" (see holt).
A grouping of organisms made on the basis of phylogenetic relationship, rather than purely on shared features. Clades consist of a common ancestor and all its descendants. The class Aves (birds) is a clade, whereas the class Reptilia (reptiles) is not, since it does not include birds, which are descended from the dinosaurs, a kind of reptile. Many modern taxonomists prefer to use clades in classification, and not all clades correspond to traditional groups like classes, orders, and phyla. Compare grade. | <urn:uuid:2bc2401e-e961-468b-b595-a89bcd9033f4> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.dictionary.com/browse/clade?qsrc=2446",
"date": "2016-09-26T23:05:29",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9303795099258423,
"token_count": 167,
"score": 3.828125,
"int_score": 4
} |
The idea of using eco-friendly technology is something the Building Industry (compared to many other industries) was slow to adopt. Having adopted it however, many companies are now leading the way in the new "Green" technologies.
Quite a number of the large, painting and decorating supply companies have embraced these new technologies which prove that environmentally friendly materials do not need to cost the earth to help save it!
The problem:Over 35 million paintbrushes are discarded every single year. That is over 95,000 every day in the United Kingdom alone! Most of these brushes do not lend themselves to a non toxic, rapid disappearance and lead to over 45 tonnes of eco damaging material taking up to 450 years to degrade. Hopefully this will make you think twice next time you buy a brush!
The solution:As the Industry looks for more sustainable products to use many companies have introduced a range of paint brushes of which the handles and other parts of the item of which are made from 65% cornstarch and other environmentally friendly materials. As corn is a natural and sustainable substance, the brushes will bio degrade in a considerable shorter period that the Polypropylene handles of many other products. Some of the bristles are made from a mixture of man made filaments and natural fibres facilitating a brush which is not only eco friendly but gives a soft feel to any paintwork.
Many companies have even ensured that their packaging is also biodegradable and environmentally sustainable. The wrapping is made from recycled or biodegradable material, while the chemical processes usually employed in printing have been removed completely. The inks are vegetable- or soy-based.
The African Union was established in March 2001, and in July 2002 was formally launched as the successor to the African Economic Community and the Organization of African Unity (OAU). The objectives of the AU are to promote economic integration and social development, and to prepare the way for political unification of the whole continent.
International organization founded as the Organization of African Unity to promote cooperation among the independent nations of Africa. Listing and profiles of member countries, speeches and communications transcripts, news, calendar, and discussion forum. [English and French]
African Union Summit 2002
Official web site of the OAU/AU Summit in Durban, South Africa. Includes the key and background documents, text of speeches and related links.
Constitutive Act of the African Union
Statement of intention of the leaders of the African nations to form a continental union, 11 July 2000.
Delegation of the European Union to the African Union. Addis Ababa, Ethiopia.
Pan-African Parliament (PAP)
Providing a common platform for African peoples and their grass-roots organizations to be more involved in discussions and decision-making on the problems and challenges facing the continent.
Wikipedia - African Union
Encyclopedia entry offers an overview, discussion of current issues, origins and history, member countries, organisation, economic status, symbols and references.
"In summary, we have ignored the earth’s environmental stop signs." –Lester R. Brown, Full Planet, Empty Plates
Chapter 2. Beyond the Oil Peak: The Oil Intensity of Food
Modern agriculture depends heavily on the use of gasoline and diesel fuel in tractors for plowing, planting, cultivating, and harvesting. Irrigation pumps use diesel fuel, natural gas, and coal-fired electricity. Fertilizer production is also energy-intensive: the mining, manufacture, and international transport of phosphates and potash all depend on oil. Natural gas, however, is used to synthesize the basic ammonia building block in nitrogen fertilizers. 16
In the United States, for which reliable historical data are available, the combined use of gasoline and diesel fuel in agriculture has fallen from its historical high of 7.7 billion gallons in 1973 to 4.6 billion in 2002, a decline of 40 percent. For a broad sense of the fuel efficiency trend in U.S. agriculture, the gallons of fuel used per ton of grain produced dropped from 33 in 1973 to 13 in 2002, an impressive decrease of 59 percent. 17
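As a quick sanity check, these declines can be recomputed from the cited values. A minimal sketch (the function and variable names are mine; note that the rounded per-ton figures of 33 and 13 gallons give closer to 61 percent, so the cited 59 percent evidently reflects unrounded source data):

```python
def percent_decline(start, end):
    """Percentage fall from a starting value to an ending value."""
    return (start - end) / start * 100

# Combined gasoline and diesel use in U.S. agriculture (billion gallons)
fuel_drop = percent_decline(7.7, 4.6)      # about 40 percent, matching the text

# Gallons of fuel used per ton of grain produced
intensity_drop = percent_decline(33, 13)   # about 61 percent from the rounded
                                           # figures; the text's 59 percent comes
                                           # from unrounded data
```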
One reason for this was a shift to minimum and no-till cultural practices on roughly two fifths of U.S. cropland. No-till cultural practices are now used on roughly 95 million hectares worldwide, nearly all of them concentrated in the United States, Brazil, Argentina, and Canada. The United States—with 25 million hectares of minimum or no-till—leads the field, closely followed by Brazil. 18
While U.S. agricultural use of gasoline and diesel has been declining, in many developing countries it is rising as the shift from draft animals to tractors continues. A generation ago, for example, cropland in China was tilled largely by animals. Today much of the plowing is done with tractors. 19
Fertilizer accounts for 20 percent of U.S. farm energy use. Worldwide, the figure may be slightly higher. On average, the world produces 13 tons of grain for each ton of fertilizer used. But this varies widely among countries. For example, in China a ton of fertilizer yields 9 tons of grain, in India it yields 11 tons, and in the United States, 18 tons. 20
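These ratios translate directly into fertilizer requirements. A minimal illustration (function and variable names are mine) of how much fertilizer each country's efficiency implies for the same harvest:

```python
# Tons of grain produced per ton of fertilizer applied (figures cited in the text)
grain_per_ton_fertilizer = {
    "World": 13,
    "China": 9,
    "India": 11,
    "United States": 18,
}

def fertilizer_needed(grain_tons, country):
    """Tons of fertilizer implied for a given grain harvest at a country's efficiency."""
    return grain_tons / grain_per_ton_fertilizer[country]

# For the same 90-ton harvest, China needs twice the fertilizer the United States does
china_tons = fertilizer_needed(90, "China")        # 10.0 tons
us_tons = fertilizer_needed(90, "United States")   # 5.0 tons
```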
U.S. fertilizer efficiency is high because U.S. farmers routinely test their soils to precisely determine crop nutrient needs and because the United States is also the leading producer of soybeans, a leguminous crop that fixes nitrogen in the soil. Soybeans, which rival corn for area planted in the United States, are commonly grown in rotation with corn and, to a lesser degree, with winter wheat. Since corn has a voracious appetite for nitrogen, alternating corn and soybeans in a two-year rotation substantially reduces the nitrogen fertilizer needed for the corn. 21
Urbanization increases demand for fertilizer. As rural people migrate to cities, it becomes more difficult to recycle the nutrients in human waste back into the soil. Beyond this, the growing international food trade can separate producer and consumer by thousands of miles, further disrupting the nutrient cycle. The United States, for example, exports some 80 million tons of grain per year—grain that contains large quantities of basic plant nutrients: nitrogen, phosphorus, and potassium. The ongoing export of these nutrients would slowly drain the inherent fertility from U.S. cropland if the nutrients were not replaced in chemical form. 22
Factory farms, like cities, tend to separate producer and consumer, making it difficult to recycle nutrients. Indeed, the nutrients in animal waste that are an asset to farmers become a liability in large feeding operations, often with costly disposal. As oil, and thus fertilizer, become more costly, the economics of factory farms may become less attractive.
Irrigation, another major energy claimant, is taking more and more energy worldwide. In the United States, close to 19 percent of agricultural energy use is for pumping water. In the other two large food producers—China and India—the number is undoubtedly much higher, since irrigation figures so prominently in both countries. 23
Since 1950 the world’s irrigated area has tripled, climbing from 94 million hectares to 277 million hectares in 2002. In addition, the shift from large dams with gravity-fed canal systems that dominated the last century’s third quarter to drilled wells that tap underground water resources has also boosted irrigation fuel use. 24
Some trends, such as the shift to no tillage, are making agriculture less oil-intensive. But rising fertilizer use, the spread of farm mechanization, and falling water tables are making food production more oil-dependent. This helps explain why farmers are becoming involved in the production of biofuels, both ethanol to replace gasoline and biodiesel to replace diesel. (Renewed interest in these fuels is discussed later in this chapter.)
Although attention commonly focuses on energy use on the farm, this accounts for only one fifth of total food system energy use in the United States. Transport, processing, packaging, marketing, and kitchen preparation of food account for nearly four fifths of food system energy use. Indeed, my colleague Danielle Murray notes that the U.S. food economy uses as much energy as France does in its entire economy. 25
The 14 percent of energy used in the food system to move goods from farmer to consumer is roughly equal to two thirds of the energy used to produce the food. And an estimated 16 percent of food system energy use is devoted to processing—canning, freezing, and drying food—everything from frozen orange juice concentrate to canned peas. 26
Food staples, such as wheat, have traditionally moved over long distances by ship, traveling from the United States to Europe, for example. What is new is the shipment of fresh fruits and vegetables over vast distances by air. Few economic activities are more energy-intensive. 27
Food miles—the distance food travels from producer to consumer—have risen with cheap oil. Among the longest hauls are the flights during the northern hemisphere winter that carry fresh produce, such as blueberries from New Zealand to the United Kingdom. At my local supermarket in downtown Washington, D.C., the fresh grapes in winter typically come by plane from Chile, traveling almost 5,000 miles. Occasionally they come from South Africa, in which case the distance from grape arbor to dining room table is 8,000 miles, nearly a third of the way around the earth. 28
One of the most routine long-distance movements of fresh produce is from California to the heavily populated U.S. East Coast. Most of this produce moves by refrigerated trucks. In assessing the future of long-distance produce transport, one oil analyst observed that the days of the 3,000-mile Caesar salad may be numbered. 29
Packaging is also surprisingly energy-intensive, accounting for 7 percent of food system energy use. It is not uncommon for the energy invested in packaging to exceed that of the food it contains. And worse, nearly all the packaging in a modern supermarket is designed to be discarded after one use. 30
The most energy-intensive segment of the food chain is the kitchen. Much more energy is used to refrigerate and prepare food in the home than is used to produce it in the first place. The big energy user in the food system is the kitchen refrigerator, not the farm tractor. 31
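Pulling together the shares quoted in this section gives a rough energy budget for the U.S. food system. A back-of-envelope sketch (the "remainder" covering kitchen preparation, retail, and marketing is my inference from the other cited shares, not a figure given in the text):

```python
# Approximate shares of U.S. food-system energy use cited in this section (percent)
shares = {
    "on-farm production": 20,  # "only one fifth of total food system energy use"
    "transport": 14,
    "processing": 16,
    "packaging": 7,
}

# Kitchen preparation, retail, and marketing absorb what is left (an inference)
remainder = 100 - sum(shares.values())  # 43 percent

# Transport relative to on-farm production: "roughly equal to two thirds"
transport_vs_farm = shares["transport"] / shares["on-farm production"]  # 0.7
```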
While the use of oil dominates the production end of the food system, electricity (usually produced from coal or gas) dominates the consumption end. The oil-intensive modern food system that evolved when oil was cheap will not survive as it is now structured with higher energy prices. Among the principal adjustments will be more local food production and movement down the food chain as consumers react to rising food prices by buying fewer high-cost livestock products.
16. Danielle Murray, “Oil and Food: A Rising Security Challenge,” Eco-Economy Update (Washington, DC: Earth Policy Institute, 9 May 2005), p. 2 and data charts; irrigation data sources include U.S.
Department of Agriculture (USDA), “Chapter 5: Energy Use in Agriculture,” U.S. Agriculture and Forestry Greenhouse Gas Inventory: 1990–2001, Technical Bulletin No. 1907 (Washington, DC: Global Change Program Office, Office of the Chief Economist, 2004), p. 94.
17. James Duffield, USDA, e-mail to Danielle Murray, Earth Policy Institute, 31 March 2005; USDA, Production, Supply & Distribution, electronic database, at www.fas.usda.gov/psd, updated 13 September 2005.
18. Conservation Technology Information Center (CTIC), “Conservation Tillage and Other Tillage Types in the United States—1990–2004,” 2004 National Crop Residue Management Survey (West Lafayette,
IN: Purdue University, 2004); CTIC, “Top Ten Benefits of Conservation Tillage,” at www.ctic.purdue.edu/Core4/CT/CTSurvey/10Benefits.html, viewed 27 July 2005; Rolf Derpsch, “Extent of No-Tillage Adoption Worldwide,” to be presented at the III World Congress on Conservation Agriculture, Nairobi, Kenya, 3–7 October 2005, e-mail to Danielle Murray, Earth Policy Institute, 9 August 2005.
19. Duffield, op. cit. note 17; tractor use and horse stocks from U.N. Food and Agriculture Organization (FAO), FAOSTAT Statistics Database, at apps.fao.org, updated 4 April 2005.
20. Fertilizer energy use data from Duffield, op. cit. note 17; DOE, EIA, Annual Energy Outlook 2003 (Washington, DC: 2004); John Miranowski, “Energy Demand and Capacity to Adjust in U.S.
Agricultural Production,” presentation at Agricultural Outlook Forum 2005, Arlington, VA, 24 February 2005; fertilizer-to-grain ratios from USDA, op. cit. note 17; Patrick Heffer, Short Term Prospects for World Agriculture and Fertilizer Demand 2003/04–2004/05 (Paris: International Fertilizer Industry Association (IFA), 2005); IFA Secretariat and IFA Fertilizer Demand Working Group, Fertilizer Consumption Report (Brussels: 2001).
21. U.S. grain production data from USDA, op. cit. note 17.
22. Brian Halweil, Eat Here (New York: W.W. Norton & Company, 2004), p. 29; USDA, op. cit. note 17.
23. Compiled by Earth Policy Institute from Duffield, op. cit. note 17; DOE, EIA, op. cit. note 20; USDA, National Agricultural Statistics Service, “Table 20: Energy Expenses for On-Farm Pumping of Irrigation Water by Water Source and Type of Energy: 2003 and 1998,” 2003 Farm & Ranch Irrigation Survey, Census of Agriculture (Washington, DC: 2004); irrigation and land use data from FAO, op. cit. note 19.
24. Data for 1950 from Sandra Postel, “Water for Food Production: Will There Be Enough in 2025?” BioScience, August 1998; irrigation and land use data from FAO, op. cit. note 19; Mark Rosengrant, Ximing Cai, and Sarah Cline, World Water and Food to 2025: Dealing with Scarcity (Washington, DC, and Battaramulla, Sri Lanka: International Food Policy Research Institute and International Water Management Institute, 2002), p. 155.
25. Murray, op. cit. note 16.
26. Ibid., p. 3; M. Heller and G. Keoleian, Life-Cycle Based Sustainability Indicators for Assessment of the U.S. Food System (Ann Arbor, MI: Center for Sustainable Systems, University of Michigan, 2000), p. 42.
27. Halweil, op. cit. note 22, p. 37; Stacy Davis and Susan Diegel, “Chapter 2: Energy,” Transportation Energy Data Book: 24th Edition (Washington, DC: DOE, Energy Efficiency and Renewable Energy, 2004), pp. 2–17; DOE, EIA, “Chapter 5: Transportation Sector,” Measuring Energy Efficiency in the United States Economy: A Beginning (Washington, DC: 1995), p. 31; U.S. Department of Transportation, Bureau of Transportation Statistics (BTS), Freight Shipments in America (Washington, DC: 2004), pp. 9–10; Andy Jones, Eating Oil—Food in a Changing Climate (London: Sustain and Elm Farm Research Centre, 2001), p. 2 of summary.
28. Jones, op. cit. note 27, pp. 1–2 of summary; Charlie Pye-Smith, “The Long Haul,” Race to the Top Web site, www.racetothetop.org/ case/case4.htm (London: International Institute for Environment and Development, 25 July 2002).
29. BTS and U.S. Census Bureau, “Table 14. Shipment Characteristics by Three-Digit Commodity and Mode of Transportation: 2002,” 2002 Commodity Flow Survey (Washington, DC: December 2004); Jones, op. cit. note 27; James Howard Kunstler, author of Geography of Nowhere, in The End of Suburbia: Oil Depletion and the Collapse of The American Dream, documentary film (Toronto, ON: The Electric Wallpaper Co., 2004).
30. Heller and Keoleian, op. cit. note 26, p. 42; food energy content and packaging content calculated by Danielle Murray, Earth Policy Institute, using USDA nutritional information and packaging energy costs from David Pimentel and Marcia Pimentel, Food, Energy and Society (Boulder, CO: University Press of Colorado, 1996), cited in Manuel Fuentes, “Alternative Energy Report,” Oxford Brookes University and the Millennium Debate, 1997; Leo Horrigan, Robert S. Lawrence, and Polly Walker, “How Sustainable Agriculture Can Address the Environmental and Human Health Harms of Industrial Agriculture,” Environmental Health Perspectives, vol. 110, no. 5 (May 2002), p. 448.
31. Murray, op. cit. note 16, pp. 1, 3; Duffield, op. cit. note 17; DOE, EIA, op. cit. note 20; USDA, op. cit. note 23; Miranowski, op. cit. note 20, p. 11.
Copyright © 2006 Earth Policy Institute
The authoritative resource for criminological theory.
Criminological Theory, 6/e provides concise chronological coverage of all the major criminological theories. The text puts theories into socio-historical context to illustrate how and why certain theories evolved, why they were popular at particular points in time, and how they are still active and influential today. The authors also examine the research and policies that were inspired by each theory. Specifically designed for one-semester courses, the text offers a straightforward approach, clear language, and comprehensive coverage that students and instructors alike will appreciate.
Chapter 1 Introduction
SECTION I: THE ROOTS OF CRIMINOLOGY
Chapter 2 The Classical School
Chapter 3 The Positive School
SECTION II: THE FOUNDATIONS OF AMERICAN CRIMINOLOGY
Chapter 4 The Chicago School
Chapter 5 Differential Association Theory
Chapter 6 Anomie Theory
SECTION III: BUILDING ON THE FOUNDATION
Chapter 7 Subculture Theories
Chapter 8 Labeling Theory
SECTION IV: MODERN CRIMINOLOGY
Chapter 9 Conflict Theory
Chapter 10 Gender-based Theories
Chapter 11 Social Control Theory
Chapter 12 Social Learning Theory
Chapter 13 Rational Theories
SECTION V: CONTEMPORARY PERSPECTIVES
Chapter 14 Contemporary Theories of Process
Chapter 15 Contemporary Integrative and Critical Theories
Norway is creating a police unit to investigate cruelty to animals, arguing that those who hurt animals often harm people too.
‘First of all, it’s important to take care of our animals, so that they enjoy the rights they have and that there be a follow-up when their rights are violated,’ Agriculture Minister Sylvi Listhaug said on Monday, describing animals at risk as ‘often defenceless’.
The initiative ‘can also help fight crime and attacks against people, since studies show that some of those people who commit crimes and misdemeanours against animals also do the same to people’, Listhaug said.
The initiative will be tested out for three years.
Police in the western county of Sor-Trondelag will appoint three people – an investigator, a legal expert and a co-ordinator – to fight animal abuse.
Animal rights groups hailed the initiative.
‘The process of taking animal abuse seriously has begun,’ said Siri Martinsen of the animal rights’ group Noah.
In 2014, 38 cases of animal abuse were reported to police in Norway, according to public radio and television NRK.
Under Norwegian law, acts of animal abuse carry a maximum sentence of three years in prison.
Similar animal rights police units operate in the Netherlands and in Sweden.
This article contributes to emerging understandings of the role of law in promoting climate change adaptation (Craig 2010, McDonald 2011) by focusing specifically on how conservation objectives are articulated in nature conservation, protected area, and threatened species laws and policies. The conservation of biodiversity under climate change is the subject of extensive scholarly inquiry across the disciplines of conservation biology (e.g., Hoegh-Guldberg and Bruno 2010, Bellard et al. 2012), environmental planning (e.g., Gillson et al. 2013, Rickards et al. 2014), and public policy (e.g., Chaffin et al. 2014, Koontz et al. 2015). Conservation practitioners and researchers have identified several key strategies for promoting climate-adaptive conservation practices, including enhancing protected areas and connectivity, reducing or removing existing stressors, and in some cases, relocation and ex situ conservation measures (Seabrook et al. 2011, Hagerman and Satterfield 2014). These studies have not necessarily questioned the overarching societal goals of conservation practice under climate change, and the limited literature that has done so (e.g., Heller and Hobbs 2014, Harris et al. 2015) has not considered how such goals should be operationalized through our laws and policies.
Conservation objectives in law perform a range of functions and are crucial to how action for biodiversity conservation is developed, funded, implemented, and assessed (McCormack and McDonald 2014). They clarify the broad aspirational outcomes sought by decision makers in crafting and implementing conservation policies and strategies and guide agency priorities and resource allocation. Provided they are drafted with sufficient specificity, legal objectives set a reference point to test the effectiveness of conservation activities. In cases of ambiguity or uncertainty, they also inform judicial interpretation of substantive legal obligations for conservation.
We examine how conservation objectives are articulated in the legal regime for nature conservation in Australia’s island state of Tasmania in the context of the conservation challenges facing that jurisdiction. Tasmania is a good case study as it requires consideration of objectives derived from international, national, and state-level laws. In articulating the case for reform, we identify problems with the existing approach and interpret the findings for contemporary conservation law practice more generally. We consider the ways in which current objectives tend to promote conservation management approaches that may be unachievable under future climate change. In proposing a path for operationalizing reform options, the barriers to and drivers of future reform are considered, as well as the potential cobenefits of pursuing a new approach. We conclude that altering our conservation aspirations is a critical first step in making our legal regime more climate adaptive, but that deeper reform of legal instruments, tools, and agency mandates will also be needed to ensure that those objectives are embedded in conservation management practice.
This paper is a result of an extensive literature review, preparation of a discussion paper, a workshop, and the project team’s regular discussions over a 12-month period. As such, it is an exploratory paper outlining perceived limitations in the legal specification of conservation objectives and opportunities to address these, rather than an empirical study. We limit our scope to those objectives directed toward conserving terrestrial biodiversity in Tasmania.
To explore whether there was support for reform of objectives in Tasmania, we invited conservation practitioners to a workshop to discuss the issues raised in a discussion paper outlining current research on law reform under climate change. Our aims for the workshop were: to draw out how current conservation objectives in law influenced the choice of strategies and actions; to find out whether there was general support for reform, and what form it might take; and to identify what processes, barriers, tools, and other issues would be important to consider. Nine local practitioners involved in conservation and related areas of interest (planning, government, environment, public advocacy groups) attended the workshop: four from government planning and environmental departments, two from advocacy groups, and one each from local government, academia, and an environmental law nongovernment organization.
The workshop was small to allow time for in-depth discussion and interaction among participants. The project team facilitated specific activities designed to stimulate discussion in both roundtable format and breakout groups, but otherwise limited their speaking to allow the focus to remain on participants’ perspectives. The workshop was audio recorded, and pertinent quotations transcribed. Participants also completed a worksheet of barriers and enablers, and these were collated. The transcripts and worksheets, together with findings from the literature, workshop, and authors’ experiences, provide evidence in support of the ideas and conclusions presented in the sections of this paper titled “The Case for Reform,” “A Systems Approach to Conservation Objectives in Law,” and “Pathways to Reform.”
In the next section, we present a review of the key legislation and plans associated with Tasmanian biodiversity objectives. We then identify key issues based on our review of the literature and doctrinal analysis (Hutchinson and Duncan 2012). We explore potential options for reform and suggest pathways by which these reforms could be progressed. We present integrated findings from the literature, workshop, and authors’ experiences.
Conservation objectives may be explicit or implicit. Explicit conservation objectives include clauses in legislation or statutory management plans and legislative directions to decision makers. For example, the objectives of Australia’s national environmental law, the Environment Protection and Biodiversity Conservation Act 1999 (Cth) (EPBC Act) include to “provide for the protection of the environment” and to “promote the conservation of biodiversity” (s3(1)). To achieve its objectives, the EPBC Act states that it must:
...[enhance] Australia’s capacity to ensure the conservation of its biodiversity by including provisions to: ... (i) protect native species (and in particular prevent the extinction, and promote the recovery, of threatened species) and ensure the conservation of migratory species.
Objectives can also be found in statutory directions to decision makers, for example where legislation requires that “the Minister must take into account” or “have regard to” a particular matter in making a decision. Some laws impose an obligation to pursue legislative objectives in public activities more generally. For example, the Threatened Species Protection Act 1995 (Tasmania) (TSPA) states that:
It is the obligation of any person on whom a function is imposed, or a power is conferred, under this Act to perform the function or to exercise the power in such a manner as to further the objectives specified in Schedule 1 [s4, emphasis added].
Implicit objectives can be discerned by considering how a law seeks to achieve its explicit objects—the legal tools and instruments used. In Australia, the primary legal mechanisms for achieving explicit objectives focus on protected areas and listed threatened species. As will be shown below, this structuring of legal mechanisms implies that Australian conservation law prioritizes rare native species and considers wilderness places as being of higher conservation value than other elements of biodiversity, including diverse genes and ecosystems. The biophysical context for conservation of Tasmania’s biodiversity and the interplay of explicit and implicit objectives are explored in the following subsections.
Tasmania is a cool, temperate island located 240 km to the south of the Australian mainland. It is separated by Bass Strait, which is approximately 350 km wide. As a result of the island’s long geographic isolation and topographic diversity, there are a large variety of habitats and high numbers of endemic flora and fauna species. For example, 28% of known native vascular plant species are endemic. Tasmania has two World Heritage Areas with outstanding universal natural values, comprising 23% of the land area of the state: Tasmanian Wilderness and Macquarie Island. Overall, half of the state is managed for conservation within public (48.7%) or private (1.4%) reserves.
Tasmania has warmed at a rate of 0.1°C per decade since the 1950s, and average temperatures are projected to increase by 2.6–3.3°C by the end of the century under a high emissions scenario. This rate is lower than that observed across mainland Australia (0.16°C per decade) and the globe (0.12°C per decade) (Intergovernmental Panel on Climate Change (IPCC) 2014), because of Tasmania’s maritime climate and southerly location.
As the climate changes and species on mainland Australia start to shift south to track cooler conditions, Tasmania is being heralded as a potential refuge for many species (e.g., Garnett and Zander 2014). However, for cool temperate and alpine species adapted to Tasmanian conditions, there are no options for range shifting because Tasmania is the southernmost land mass between the mainland and Antarctica.
The challenges facing conservation under climate change in Tasmania are illustrated by research into the changing climate suitability for the Ptunarra brown butterfly (Oreixenica ptunarra) and the Tasmanian lowland temperate native grasslands. The Ptunarra brown butterfly is listed as vulnerable under the Tasmanian TSPA. Tasmanian lowland temperate native grasslands are listed as a “Critically Endangered Ecological Community” under the EPBC Act. Less than 10% of the natural extent of this community remains, mostly on private freehold land. Remnant patches in good condition are species rich and important habitat to a diverse array of flora and fauna (Harris et al. 2015), many of which are also listed as vulnerable or threatened.
Recent research suggests that the climatic suitability for the listed lowland native grasslands may contract under climate change and that the rate of this change is rapid (Harris et al. 2015). As the climate becomes less suitable, a gradual change is expected in species composition, structure, and habitat quality of the grassland communities. Attempting to conserve the current composition of the grassland communities may not be possible, and new benchmarks will be needed to judge management success. In such cases, conservation biologists are increasingly recommending that management should focus on maintaining diversity, structure, and function, rather than attempting to preserve current species composition (Dunlop et al. 2013, Heller and Hobbs 2014).
There is considerable uncertainty associated with all projections of future change. Projections for the Ptunarra brown butterfly varied widely depending on how the models were parameterized. The species was projected either to experience very little contraction of habitat or to come close to extinction by the end of the century due to lack of suitable climate (Harris et al. 2013). Therefore, it is not possible to predict exactly what changes will occur in response to the changing climate or the location or timing of such change.
Current legal approaches to conservation are not designed to cope with such uncertainty or with changing boundaries or composition of community types. Legal mechanisms for protecting grasslands include formal reservations with fixed boundaries recorded on property titles and long-term conservation covenants over private land. Management success is judged through indicators such as the abundance of listed threatened species and floristic composition. Similarly, one of the specific objectives of the recovery plan for the Ptunarra brown butterfly is to “ensure the species persists long-term throughout its area of occupancy” (Bell 1998: 3), yet attempts to maintain a static baseline may no longer be viable under a changing climate.
The legal framework for conservation in Tasmania is a nested hierarchy of instruments from international, national, state, and local levels. International agreements, particularly the Convention on Biological Diversity (CBD) and the Convention Concerning the Protection of the World Cultural and Natural Heritage (World Heritage Convention), set high-level objectives. The CBD legitimizes the emphasis given to species conservation in national and state legislation. The World Heritage Convention supports national legislation that protects world heritage areas from development that would significantly affect world heritage values.
Consistent with the approach across Australian states, Tasmania’s conservation laws and management arrangements take a two-tiered approach. Primary emphasis is on reservation of large areas of public land primarily for conservation purposes. The second tier involves the listing and protection of threatened species, typically by requiring consideration or balancing of the impacts of specific development through environmental impact assessment frameworks. Conservation legislation also supports proactive management activities, such as the preparation and implementation of species recovery plans, but resource constraints greatly limit the effectiveness of such measures.
Protected area law is primarily a matter for the states, but with Tasmania having both world heritage areas and listed species under the EPBC Act, the Australian Government also has a role in securing the conservation of the state’s biodiversity. Australia’s Strategy for the National Reserve System 2009–2030 provides broad guidance for that management, recognizing that the primary means of securing long-term protection for Australia’s terrestrial biodiversity is a “comprehensive, adequate and representative” national system of protected areas.
Tasmanian legislation for establishing protected areas sets out broad objectives. Schedule 1 of the Nature Conservation Act 2002 (Tasmania) (NCA) establishes the classes of protected area into which land may be reserved and lists the values and purposes of reservation for each class. Management objectives for each class are set out in Schedule 1 of the National Parks and Reserves Management Act 2002 (Tasmania) (NPRMA). The relevant objectives for management of national parks, for example, are:
(a) to conserve natural biological diversity; ...
(d) to conserve sites or areas of cultural significance; ...
(g) to protect the national park against, and rehabilitate the national park following, adverse impacts such as those of fire, introduced species, diseases and soil erosion on the national park’s natural and cultural values and on assets within and adjacent to the national park;
(h) to encourage and provide for tourism, recreational use and enjoyment consistent with the conservation of the national park’s natural and cultural values; ...
(j) to preserve the natural, primitive, and remote character of wilderness areas.
Site-specific objectives may also be set in statutory management plans developed by reference to the objectives for that class of reservation. For example, the Tasman National Park and Reserves Management Plan specifies objectives under thematic headings such as:
The management plan then specifies “policies” and “actions” through which these objectives are to be achieved. However, the meanings of “protect” and “maintain” are undefined, and the difference between the two is unclear. “Maintain,” for example, could mean ensuring that all species currently present in the park remain extant, and/or that their current populations are sustained, and/or that their distributions remain as at present.
The objectives of Tasmania’s TSPA include:
“Survival” is defined as “the continued existence of viable populations of a taxon in the wild,” so the objective is to enable all species to remain in “an independent, unpossessed or natural state” (TSPA s3). In practice, these objectives can only be operationalized through substantive protections under the Act for listed threatened species. Listing processes may not cover all taxa equally because of unconscious priorities, bias in the nomination process, or political or practical considerations, including the availability of information and community awareness. Listed species may fall into one of a number of categories, including endangered, vulnerable, or rare. As a result, certain rare and popular or iconic species are more likely to be the subject of the TSPA’s protection than other less well-known or identifiable species.
No legislation explicitly identifies whether the main purpose of species protection is to prevent extinction, avoid new species being added to statutory threatened species lists, or reduce the threats to already listed species. The Tasmanian legislation appears to seek all three, but does not prioritize or provide direction on how this might be achieved, other than to impose penalties for “taking” listed species, and to establish mechanisms for threat abatement and recovery plans (TSPA ss51, 25, 27).
A more forward-looking and adaptive conservation ambition is articulated in Tasmania’s Natural Heritage Strategy, the aim of which is to:
improve conservation outcomes in Tasmania by taking a coordinated, strategic landscape approach to conservation and management, including strategic planning and assessment. (Department of Primary Industries, Water and Environment (DPIPWE) 2013)
The objectives articulated in Tasmanian protected area and threatened species laws are underpinned by the goals of a Resource Management and Planning System (RMPS), which are set out in annexes to key land-use planning and conservation statutes. The RMPS objectives include “to promote the sustainable development of natural and physical resources and the maintenance of ecological processes and genetic diversity.” Statutory functions under protected area and threatened species legislation must be exercised “having regard to” or “to further” the RMPS objectives, but there is no guidance about the relative priority to be given to conservation over other listed social, economic, and cultural objectives.
Three key themes emerge from the overview of Tasmanian conservation objectives set out in the previous section: (i) an emphasis on maintaining the current status and location of ecosystems and their constituent parts, or returning them to an “undisturbed” state; (ii) a high value placed on rarity, nativeness, and wildness; and (iii) focus on specific parcels of reserved land as the sites for most conservation effort. These themes have important implications for conservation law and policy, particularly given the expected influence of climate change on the state’s biodiversity.
The likelihood of range shifts under climate change means that areas need to be identified and managed for both present and future habitat. In some cases, this means ensuring that areas can be made available for conservation in the future, whereas other areas may require active restoration. Enhancing connectivity between suitable areas is also recognized as a critical feature of climate adaptive approaches. Yet the emphasis of current objectives tends to view the conservation estate and associated systems as static. For example, the definition of “habitat” in the TSPA refers only to the habitat currently occupied by a listed taxon, which limits the impetus for restoration of degraded areas or active management of areas for future habitat. Similarly, reference to preserving the “natural state” of protected areas suggests a static or specific baseline. The TSPA’s reference to maintaining the “evolutionary potential of species in the wild” acknowledges change processes, but arguably only at the pace and over timescales that have been experienced historically.
Current objectives place a high value on rarity, which is in turn a key criterion for the intensive application of conservation effort and resources in practice. Substantive legal protection and practical conservation effort are prioritized to direct limited conservation resources toward protecting those species closest to being lost to extinction. Threatened species regimes like the TSPA and EPBC Act place species into categories of threat, from “critically endangered” and “endangered” to “vulnerable” or “rare,” each defined in relation to their proximity to extinction. The most threatened species are most likely to be listed, receive prioritized funding, and benefit from recovery planning and threat abatement efforts. As one workshop participant commented:
The Environment Protection and Biodiversity Conservation Act and most of the state Acts have an overwhelming emphasis on species-level conservation and rarity, which to me, on reflection, seems quite bizarre, given all we know now.
In the past, individual species have been valued as surrogates for more general concepts of biodiversity (Dunlop et al. 2013). Yet the most interactive species, including those that are common, may be far more important to ongoing ecosystem function and, in the face of rapid decline, their loss may be more likely to cause ecosystem transformation. It may be necessary to value biodiversity, and particularly rare species, in a different way so that their function becomes more important than their population size. Indeed, the strong emphasis on avoiding extinction in the wild may not only be increasingly hard to achieve, it may actually prove undesirable in terms of broader ecosystem health and resilience, especially if areas are managed to provide critical habitat for single species (Steffen et al. 2009, Camacho et al. 2010). This is not to suggest that we advocate giving up on the conservation of rare species, or that we regard the extinction of any species as acceptable. However, in some cases, the reality of climate change may force a reorientation of the current strong emphasis on protection in the wild.
The Tasmanian conservation objectives outlined above place a high value on native and indigenous species over other species. There are 43 references to “native” species in the TSPA. Some definitions of native species are limited to a very particular subset of Australia’s biodiversity (McCormack and McDonald 2014). Only species “naturally occurring in Tasmania” may be listed as threatened under the TSPA, and thus qualify for substantive protection (TSPA ss25, 27, 32, 51). The EPBC Act has one of the broadest definitions of “native species,” but still limits the term to a species that:
is indigenous to Australia or an external Territory (or its seabed or coastal sea); members of which periodically or occasionally visit; or that was present in Australia or an external Territory before 1400 (s528, emphasis added).
State legislation is often more restrictive, defining native species as “indigenous” or as “continuous residents,” including “periodic visitors,” but does not define those additional terms. Although there is certainly a role for legal objectives that emphasize the conservation of native species, the implicit priority that is currently given to native species at the expense of a more flexible and functional approach should be reconsidered to avoid limiting options for adaptive conservation through the law.
The prominence of the Tasmanian Wilderness World Heritage Area in the Tasmanian framework places explicit value on the importance of nature in an undisturbed state. This is also the case in the threatened species law: the first objective of the TSPA is to ensure that “native flora and fauna in Tasmania can survive, flourish and retain their potential for evolutionary development in the wild” (Sch 1, cl3, emphasis added). “Wild” is defined as “an independent, unpossessed or natural state and not in an intentionally cultivated, domesticated or captive state regardless of the location or land tenure” (TSPA s3).
International, national, and state law all emphasize the importance of in situ conservation. However, with the rapidity of climate change, coupled with habitat fragmentation, likely to undermine the capacity of some species to independently evolve and/or adapt, human intervention in the form of restoring large-scale ecological connectivity and/or assisted colonization is likely to become increasingly necessary (Braverman 2014). Terms such as “wild” and “natural” are therefore increasingly unhelpful in directing conservation outcomes under anthropogenic climate change (Pritchard and Harrop 2010). This is not to say that protecting areas of “wilderness” is not valuable. Rather, it may become necessary to adjust our understanding of “the wild” by accepting a higher level of human influence in the form of active conservation management, in order for some species to persist in those places. In this regard, Meine’s (2015: 91) “relative wild: the degrees of wildness and human influence in any place, and the ever-changing nature of the relationship between them over time” may provide a useful terminology.
Conservation law has developed with an emphasis on the reservation of parcels of public land for conservation purposes. Even if private reserves are included, the emphasis on reserves with well-defined spatial boundaries implies that natural resources outside those boundaries remain available for exploitation (Preston 2013, McCormack and McDonald 2014, Fitzsimons 2015). This implies that conservation should primarily take place within defined areas and reinforces a distinction between “wild” protected places and places that are “tamed” for human use. This distinction may undermine the potential for integrated, landscape-scale conservation planning across tenures that recognizes the importance of connectivity, refugia, and healthy landscapes surrounding protected areas (Laurance et al. 2012).
Much of the climate adaptive conservation literature advocates a shift toward systems and landscape approaches to conservation, rather than protection of specific species. Recognizing the social and cultural importance of retaining populations of iconic species in the wild, we do not advocate a wholesale replacement of species-level conservation. Even if we did, the difficulties of system-wide approaches should not be understated. The barriers to implementing new objectives are considered below, but a key reason that current approaches perform poorly is because they have been inadequately funded, and there is little evidence to suggest that a new set of objectives would come with enhanced resources for implementation. Furthermore, legal structures are rarely the product of intentional institutional design, instead representing the product of trade-offs among competing interests. Rather than proposing a wholesale replacement of current approaches, therefore, we point to three key changes to current objectives that would facilitate a transition to more climate-adaptive conservation legal frameworks. They are:
Social–ecological systems are both spatially and temporally dynamic. Conservation objectives must, therefore, make express allowance for the certainty of change over time. Explicit recognition of change processes could translate into some areas becoming more or less important in conservation terms. As noted by a workshop participant:
Maybe an area that is a reserve stops being a reserve because everything dies back, and it might work the other way because areas that are currently less protected may need to become more protected as refugia because they then become suitable. It is that change over time and ensuring that the provisions of how we divide and designate land use allows for that change.
Given the diverse values of protected areas, we do not advocate that reserves should be degazetted if they no longer protect threatened species. It will be important to ensure that adaptive management approaches are used to promote more conservation-focused objectives, and not to justify dilution or compromise of protected area values. Rather, protected areas should be designed and managed in ways that facilitate adaptation to change. Diverse habitats in good condition currently in reserves are most likely to be resilient to change in the short term and have greater adaptive capacity in the long term. More dynamic and flexible practices could be promoted by requiring, through appropriate legislative mandates, that protected areas be managed for adaptation to climate change (Scott and Lemieux 2005).
The legal definition of habitat also needs to change. As noted previously, in Tasmania, it relates only to the area currently occupied by a listed species. The glossary to the Australian Capital Territory’s recent Nature Conservation Act 2014 (NCA ACT), on the other hand, defines “habitat” as an area that is or “was once occupied (continuously, periodically or occasionally) by an organism or group and into which organisms of that kind have the potential to be reintroduced.” This definition recognizes that changes to species distributions may leave ecological niches unfilled. Whether the NCA ACT definition will result in practical changes to conservation practice remains to be seen, but the new Act does afford greater flexibility in the areas deserving of conservation management.
Having a stronger focus on drivers of change may also involve a shift in focus from in situ conservation to ensuring continued existence of species, with the specific locations and abundances of species seen as transient (Dunlop et al. 2013). This would then pave the way for wider use of assisted colonization, translocation of species and the establishment of “insurance populations” in areas that are more climatically favorable. New legal tools may be required to facilitate such measures. A shift toward ex situ or translocated conservation would also involve a change to the objectives of protected area management. Although some areas may remain subject to restoration-oriented interventions that are aimed at maintaining the area at some historical baseline, most will either be allowed to adapt naturally to changing conditions or have innovative or experimental interventions aimed at accommodating new species and ecological assemblages.
Adaptive conservation objectives may need to focus more on system functions:
At the moment, we have an obsession with compositional [biodiversity]—what’s there, but we also need to take into account the structural—how it’s arranged, and the functional—what it does. (Workshop participant.)
This may mean conservation effort is directed toward non-native or neonative species that perform important functions in the landscape. Emphasis on systems over species may also demand reconsideration of the way that rarity is prioritized in Australian conservation law, with a possible shift toward a triage approach to in situ conservation efforts (Wiens and Hobbs 2015). However, there may be powerful social, cultural, and recreational reasons for ongoing conservation of iconic species, so the difference may initially be one of emphasis rather than object.
A systems approach also involves a shift in focus to planning at multiple scales, for the benefit of both species and ecosystems and ecological processes (Heller and Zavaleta 2009, Mawdsley et al. 2009, Polak et al. 2015). Although multiscalar management has been advocated for some time, it has enjoyed limited uptake. As noted by a workshop participant:
The Hawke review [of the EPBC Act] in 2009 recommended that the scale at which we consider and manage biodiversity be raised ... to look at landscapes and whole ecosystems and I think that has yet to happen.
A multiscaled approach that seeks to maintain ecosystem health under changing conditions could see bioregional scale objectives aimed at protecting ecosystems and species-level objectives that do not focus on maintaining current distributions (Dunlop et al. 2013). Explicit adoption of multiscalar approaches can provide a stronger rationale for concentrating on connectivity, including altitudinal and latitudinal connectivity along the land–coast–ocean continuum, and between public protected areas and private land. Recognition of unique qualities at local scales will also enable some areas to be managed in isolation in order to contain or keep out threats such as fire, invasive species, and disease (Prober and Dunlop 2011). A nested and integrated set of objectives across multiple scales could also help address problems of regulatory fragmentation and duplication that arise from Australia’s federal model of environmental governance and enable regulatory efforts to focus on the scale at which relevant stressors and human activities occur.
The characteristics of law that ensure its stability, predictability, and consistency over time also make it slow to change. Enduring reform also requires strong community consensus. It is both likely and in some ways desirable that the conversation about reforming conservation objectives will take some time. From the considerations discussed in the previous sections, we have identified requirements for formulating legal objectives for biodiversity conservation and how the barriers to implementing such reforms might be addressed.
Even if the need for reform of conservation objectives is accepted, their ideal form is not settled (Hagerman et al. 2010). Similarly, it is not clear whether it would be better to prescribe changes to current conservation goals through existing legal instruments, or whether attention should focus on reforming the decision-making processes for conservation. Principle-based regulation defines the broad goals to be pursued and allows managers the freedom to achieve those goals and determine priorities for action and investment in the most efficient way (Bottrill et al. 2008). Parks Victoria took such an approach in developing the most recent protected area management plans for Victoria’s Alpine (Parks Victoria 2014) and South West (Parks Victoria 2015) regions. These plans set broad conservation and management goals at the regional or landscape level, across tenures, ecological systems, and habitats, allowing site-specific management to be determined at the level of individual land parcels. Site-based management plans can thus be updated more regularly because the areas covered are smaller; the plans do not need to accommodate long timeframes and associated uncertainty (including in funding and climate and other ecological impacts); and the site-based plans will not require the same formal review periods that apply to traditional statutory management plans. More regular management planning reviews have the advantage of allowing climatic and ecosystem changes to trigger changes to the formal, agency, and private planning documents, allowing more adaptive approaches to management.
Process goals may provide an alternative framework for prioritizing and formulating intervention under climate change (Heller and Hobbs 2014) as they avoid prescriptions about how nature should be. Yet they lack the measurability required by contemporary adaptive management and accountability standards. The Parks Victoria plans noted above suffer from a similar limitation.
Where objectives are specified, the statutory duty should be to exercise powers and functions to achieve the conservation of biological diversity and ecological integrity and not merely to consider the matter in the exercise of a power or function (Preston 2013). This would overcome a key weakness of current approaches that simply require a balancing of competing interests. Balancing mandates typically privilege development, as decision makers lack clear guidance about how to exercise their discretion in such cases. As one workshop participant from the State Government noted:
[I]n terms of the reality of how government approaches these things, actually identifying priorities—species and values for conservation—becomes a really important part. You can’t do it all so we actually need our legislation to give some sort of guidance on what the priority areas are ... “having regard to” versus “promoting,” those words mean a lot when it comes down to making decisions.
The form that new objectives should take must be part of a wider socio-political project and in thinking about embarking on such a project, it is necessary to consider its practical feasibility. To that end, the next section examines key barriers to reform, including potential adverse side effects, and the factors that might facilitate or enable reform.
Barriers to conservation law reform derive from the way that environmental law generally is structured in the Australian federation, as well as popular views about what our laws should try to achieve (Australian Panel of Experts on Environmental Law (APEEL) 2015). To the extent that Australian law seeks to implement the requirements of the CBD, the emphasis of that convention on in situ conservation may be considered a barrier to reform, particularly given that the instruments of international law are notoriously hard to amend. Nonetheless, the 2014 Conference of the Parties to the CBD highlighted the key role of the Strategic Plan for Biodiversity 2011–2020 for promoting effective implementation of the Convention through a strategic approach, comprising a shared vision, a mission, and strategic goals and the Aichi Biodiversity Targets (CBD 2014). These targets (CBD 2010), although retaining an aspiration to prevent the extinction of known threatened species (Target 12), also promote the importance of well-connected systems of area-based conservation measures that are integrated into the wider landscapes and seascapes (Target 11) and enhance ecosystem resilience (Target 15). There are elements here of systems thinking that provide a basis for more adaptive specification of CBD aspirations. In any event, Australia’s CBD obligations are formally met by the provisions of the EPBC Act. There is very minimal national oversight of the way in which state-based regimes operate, provided they do not undermine or contradict the national approach. Perhaps more problematic, therefore, is the sheer complexity of the conservation framework as it operates at the local level. There, management and development decisions are influenced by local land-use planning provisions and applicable resource development laws, as well as state and Commonwealth conservation requirements relating to specific listed species and ecosystems.
Although the mandate of the CBD itself may not operate as a major barrier, therefore, the structural complexity of conservation governance—both horizontally across land-use sectors and vertically across scales of government—may well impede the effectiveness of any change to objectives in state-based conservation laws alone.
The complexity of existing regimes and the cost and resources required to develop the science needed to achieve conservation outcomes under climate change may also drive some reluctance to interfere with current approaches:
Even managing biodiversity as a static thing is really, really challenging, so bringing in climate change adaptation blows it out of the water again. I don’t think we’ve got a good handle on even resilient species and communities, let alone broader concepts and realities. (Workshop participant.)
The most persistent barrier identified by workshop participants was the lack of political will to consider biodiversity conservation generally, or climate-adaptive requirements in particular:
The other thing that is really striking in my work is how much biodiversity is actually perceived to be a dirty word out there.
There was a general view that the public lacked both an understanding of the future needs of conservation and a commitment to reforming current approaches. A strong need was expressed for ways to begin this conversation at a wider community level. Others lamented what they perceived to be a general disinterest in evidence-based decision making in government, and a short-term, electorally driven prioritization of management activities toward protecting assets and public safety.
A related issue concerns the fear that developing more achievable and adaptive objectives would simply dilute our commitment to strong conservation outcomes, rather than legitimately attempting to enhance them. Workshop participants did not explicitly identify this as a major barrier, yet it quickly arose as a major issue in subsequent informal discussions among the project team and with staff from environmental NGOs. This may be explained by the way in which options were presented and discussed in the workshop, which did not demand that participants make choices between threatened species approaches and a system-level approach that would tolerate explicit triage.
If a principles-based approach were to be adopted, some of the concern about lowering conservation standards might be allayed by formally adopting the nascent environmental law principle of “nonregression.” Widely recognized in human rights law, nonregression in environmental law reform would mean that existing environment norms could only be revised if the change did not reduce standards of protection (Prieur 2012). It could be argued that embracing a more dynamic and systems-based approach to conservation objectives would not see regression in levels of protection, but would actually enhance conservation outcomes because it ensures that conservation efforts are informed by modern understandings of system dynamism and the multiple scales at which change occurs. Applying the nonregression principle to conservation objectives would mean that conserving individual species would remain a priority but that, in some cases, this would involve greater emphasis on ex situ efforts and on management that would benefit multiple species to promote overall system health. Indeed, it might actually result in a more enduring and meaningful approach to species conservation in the longer term.
A related concern among workshop participants was that merely updating objectives is pointless without addressing failures of implementation through specific legal tools:
At the end of the day they [objectives] have to be achievable, it’s a nonsense to set goals that aren’t achievable.
It’s good to have some aspirational objectives I guess, but the reality of actually getting them into a work plan for a ranger is a long way off.
Objectives at the high level need to flow through in the land-use planning regulations and be quite explicit in statutory instruments ... and at the moment there’s not even clarity in whether species and communities have a place there, so if other concepts around adaptation of biodiversity have relevance, then that needs to be explicit.
Participants saw significant cobenefits in engaging in a wider public conversation about future conservation goals, especially in terms of achieving better alignment of goals in different regimes, addressing nonclimatic stressors, and reducing the economic inefficiencies of multiple legal requirements.
These factors are by no means unique to reforming conservation objectives or conservation law more generally, but they highlight the importance of taking a longer-term view of the overall reform project, and the need for a range of public information and engagement activities. In this regard, it will be essential to use respected and influential “champions” to lead this debate. In a state such as Tasmania, property owners who are engaged in both stewardship activities and economically productive land uses can offer insights into the potential for multiple benefits. Deeper engagement with Indigenous people was also seen as a key enabler of the debate needed to precede future reform.
The fact that Tasmania already has a good reserve system was also considered advantageous because it provides a significant land area over which to accommodate change and range shifts and allows for greater experimentation with multiscale management approaches.
The architecture of conservation laws is currently preoccupied with processes of species listing and the development of recovery plans, front-end decision making and environmental impact assessment procedures, and the establishment of protected areas. Current conservation objectives that seek to have it all, in terms of in situ conservation of species and retention of protected areas as undisturbed, natural, or wild, are likely to fail under future climate change. Even if they were fully funded, the aspiration to hold things to some historical baseline ignores the reality of current and future climatic shifts. Legal regimes need to be made more agile and responsive to changing ecological needs. In arguing for a shift toward a more dynamic, multiscalar approach to conservation objectives, we acknowledge the inevitability of changes to protected area management plans, greater emphasis on ex situ approaches, and the need for far deeper engagement with private landholders.
Changing conservation objectives alone will not effect these shifts. Reforming objectives is only the first step toward more fundamental changes to the legal tools and instruments that are used to achieve objectives. Indeed, whatever changes are made to objectives in legal regimes whose substantive focus is conservation, parallel reforms are needed in the resource management frameworks with impacts on biodiversity, such as mining, forestry, water, and land management.
Wholesale reform is unlikely to occur rapidly, and we have identified key barriers to reform of both objectives and the conservation regime itself. The ideal form for more adaptive conservation objectives is not settled. Options include the inclusion of principles or broad goals, like those outlined above, or the stipulation of more measurable prescriptions, or revision to processes by which priorities and funding are determined. A pluralistic approach to reform may be the most appropriate strategy, involving overarching, nonspecific conservation goals that operate across landscapes, sectors, and tenures, with more specific, measurable objectives that articulate priorities for action and investment.
The authors gratefully acknowledge the participation of conservation stakeholders in this study.
Freshwater - Why care? (Hungary)
Water is an essential requisite for life, an indispensable element of ecosystems and landscapes, and an important factor in economic development. In Hungary it is a renewable but vulnerable resource.
The country’s location (a basin surrounded by the Carpathian Mountains, forming part of the Danube catchment area) and its spatial/geographical features determine its relief, hydrological and meteorological characteristics (see Map 1).
The climate is moderate with a strong continental influence. Average yearly precipitation is around 600 mm, but it is unevenly distributed in space and time. Droughts are common, especially in the south-eastern part of the country.
The catchments of our rivers - with a few exceptions - are located in the mountainous areas of the surrounding countries: 96% of all surface water comes from outside our borders. Our surface water resources and the rivers’ flow regimes are characterised by high spatial and temporal variability. Two-thirds of the country consists of flatlands, much of which lacks a natural outlet, is extremely low-lying and is exposed to flooding. The area exposed to flood risk extends to 21 248 km2 (see Map 2).
In addition, there is a substantial risk of inland flooding - the total inundated area can be as high as 44 890 km2 (see Map 3). Thus river and inland flooding, as well as droughts, are key issues in Hungary.
As for our lakes - including Lake Balaton, the largest shallow lake in Central Europe - these are fully or partially protected as wetlands of international importance under the Ramsar Convention (altogether 28 Hungarian wetland habitats are inscribed on the Ramsar List, with a total area of 233 000 ha - see Part C Biodiversity for more details).
Oxbow lakes are of special importance, serving diverse purposes (nature protection, fisheries, irrigation) and as a buffer for inland water. Surface waters make up an important component of the National Ecological Network (see Part C Biodiversity for more details).
Groundwater resources in Hungary are substantial and abundant (also in the European context), providing around 95% of the drinking water supply. Given that two-thirds of the public water supply comes from vulnerable water sources, their protection is a high priority of water management.
Hungary is well known for its richness in thermal waters (see Box 1). A large part of these are recognised as world-famous mineral and thermal waters with a favourable composition and are therefore protected.
For references, please go to www.eea.europa.eu/soer.
This briefing is part of the EEA's report The European Environment - State and Outlook 2015. The EEA is an official agency of the EU, tasked with providing information on Europe's environment.
Is it possible to differentiate between lightning related surges and artificially generated electrical surges?
A power surge is a condition in which voltage varies from normal levels (110 volts for a single-phase system). These variations can be caused naturally by events such as lightning, or artificially by many different events, ranging from simple power interruptions to significantly more complex phenomena such as harmonics on the power line. Both sources of surge can cause similar types of damage to equipment, including premature failure of motors, computer and communication equipment mis-operation (lockups), loss of equipment following a thunderstorm, and other symptoms of failure.
There are subtle but significant differences between the nature of damage caused by the two types of surge (artificially vs. naturally generated). These differences arise from surge magnitude and duration: magnitude refers to the amount of energy contained in the power surge, while duration refers to the amount of time the power surge is sustained.
A lightning-related surge carries a significantly larger amount of energy (magnitude) than artificially generated surges. Despite the greater energy, lightning lasts for a very short period of time (duration) compared with many types of artificially generated surges. As a result, differences in magnitude and duration lead to different signs of damage.
Damage caused by lightning tends to be catastrophic and localized, while damage caused by artificially generated surges is less severe but more widespread. Due to these differences, the nature and extent of damage sustained by a piece of equipment could indicate the source of a power surge.
Typically, when components show signs of prolonged overheating (melting, discoloration, widespread smoke contamination), an artificial power surge becomes the primary suspect of the equipment failure.
On the other hand, when the damage is localized and extensive, such as blown components, lightning becomes the primary suspect. Lightning is likely the cause when signs of damage indicate a surge with high magnitude but without enough duration to cause widespread damage. In most cases, a visual inspection can provide significant information about the extent of damage.
In addition to the nature and extent of damage, the type and design of the damaged component can tell a great deal regarding the source of a power surge. Once the exact damaged component (or part) inside a piece of equipment is identified, one can use the information to identify the source of a surge. This is because lightning surges can be conducted (or induced) through any part that connects a piece of equipment to the outside world, including communication ports and power supplies, while an artificially generated power surge can only be conducted through the power supply.
For example, if equipment damages consist of failed modems, network cards, TV tuners or other non-power related components, then the damage is likely the result of lightning-related surge that would have been induced on the communication lines. Other forms of power surge could then be eliminated based on available information and other observations. On the other hand, if the damage is limited to power supplies or other power related components, then damage could be the result of either lightning or other forms of power surge.
The events that led up to the claimed equipment damage also provide significant information that can be used to identify the source of a power surge. A lightning-related surge must accompany a thunderstorm, with lightning in the area on the date and time of the loss. An artificially generated power surge, on the other hand, is typically caused by an event that could be either inside or outside the insured's premises. Identifying the event that led to the loss is necessary to determine the exact cause and source of the power surge.
Answer: Yes. A thorough and detailed investigation by a qualified person of the nature and extent of damage sustained by a piece of equipment can differentiate between a lightning-related surge and an artificially generated surge.
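The diagnostic reasoning above - weighing the extent of damage, the types of components affected, and whether a thunderstorm coincided with the loss - can be sketched as a coarse decision rule. This is an illustrative heuristic only, not part of any investigative standard; the field names and the ordering of the checks are assumptions, and a real determination requires inspection by a qualified person.

```python
from dataclasses import dataclass

@dataclass
class DamageObservation:
    localized_catastrophic: bool    # e.g., blown components in one spot
    widespread_overheating: bool    # melting, discoloration, smoke contamination
    non_power_components_hit: bool  # modems, network cards, TV tuners, etc.
    thunderstorm_at_loss: bool      # lightning confirmed in the area at the loss time

def likely_surge_source(obs: DamageObservation) -> str:
    """Coarse classification of a surge source from damage observations."""
    # A surge entering through communication lines (non-power components)
    # can only be lightning-induced, per the reasoning above.
    if obs.non_power_components_hit and obs.thunderstorm_at_loss:
        return "lightning"
    # Prolonged overheating implies a sustained, long-duration overvoltage.
    if obs.widespread_overheating:
        return "artificial"
    # High-magnitude, short-duration signature coinciding with a storm.
    if obs.localized_catastrophic and obs.thunderstorm_at_loss:
        return "lightning"
    return "inconclusive"
```

For example, widespread overheating with no storm activity classifies as artificial, while failed communication ports during a confirmed thunderstorm point to lightning.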
LWG Consulting is a global leader in Forensic Engineering & Recovery Solutions. They provide Cause & Origin, Failure Analysis, Fire and Explosion Investigations, Accident Reconstruction, Damage Evaluations and Equipment Restoration Services following disasters of all kinds. LWG has served the insurance, legal and risk management industries for over 25 years. Their Experts travel globally from 19 offices located across the U.S., Canada, the U.K. and Singapore.
mineral water, spring water containing various mineral salts, especially the carbonates, chlorides, phosphates, silicates, sulfides, and sulfates of calcium, iron, lithium, magnesium, potassium, sodium, and other metals. Various gases may also be present, e.g., carbon dioxide, hydrogen sulfide, nitrogen, and inert gases. Ordinary well or spring water, in contrast, contains far fewer substances, mostly dissolved sulfates and carbonates, and calcium and other alkali and alkaline earth metals. Many mineral waters also contain trace elements that are thought to have therapeutic value. Spa therapy, widely practiced in Europe, advocates bathing in and drinking mineral waters as a cure for a variety of diseases. Many authorities believe that the success of such therapy really results from the beneficial effects of rest and relaxation. Famous European resorts include Bath, Spa, Aix-les-Bains, Aachen, Baden-Baden, and Karlovy Vary (Carlsbad). Prominent among resorts in the United States are Poland, Maine; Saratoga Springs, N.Y.; Berkeley Springs and White Sulphur Springs, W.Va.; Hot Springs, Ark.; French Lick, Ind.; Waukesha, Wis.; and Las Vegas Hot Springs, N.Mex. Many mineral waters are now prepared synthetically, the various mineral ingredients being added to ordinary water in proportions determined by careful chemical analysis of the original ingredients. See spring.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
The common heritage of mankind concept is relatively new in international law. The author recalls that fairly similar concepts exist in certain municipal law systems, in particular in Roman law. He then attempts to analyse the legal bases of this concept in international law. For this purpose he examines in turn the opinions of lawyers, the practice of states - especially at the time of the first Conference on the Law of the Sea - as well as a series of international treaties relating in particular to the Law of the Sea, to the use of outer space and to human rights.
Having identified the essential elements of such a definition, the author queries the future of this concept in the international law of tomorrow. He notes that this future depends on the capacity of the human race as an institution to adapt itself and to learn how to balance the different interests involved.
The author concludes by drawing up a list of the assets, resources, rights and interests which, in his opinion, form part of the common heritage of mankind. He includes the universe, the cosmos, the sun, the moon, the stars and all the other celestial bodies, outer space, the territories of the Polar regions, the sea-bed beyond the limits of national jurisdiction, the high seas and the air space above them. To this celestial and terrestrial heritage he adds a spiritual heritage composed of the fundamental human rights which mankind may invoke as well as a cultural heritage comprising intellectual and industrial possessions and certain cultural assets which bear witness to the history of the civilization of the human race.
Glucose Monitoring in Neonates:
Glucose determination in the blood or plasma is used in the diagnosis, monitoring and treatment of persons with diabetes mellitus. Neonates (full term birth to 30 days old) and premature infants (gestational age in weeks at birth up to original due date) represent a mixed group of individuals in whom glucose monitoring is crucial and who have a physiology that differs significantly from older children and adults. Prevention or treatment of hypoglycemia (low blood sugar) in this group is just as important as the recognition and treatment of hyperglycemia (high blood sugar.)
Most full-term neonates are born with glucose reserves that last about 6-24 hours; if feeding or intravenous nutrition is not started within that period, glucose levels can fall dangerously low. Because untreated hypoglycemia can be fatal or leave the child with serious impairment, it is common practice to measure the blood glucose of neonates.
When “bedside” glucose monitors were first used for neonates, it was discovered that increased hematocrit levels could interfere with the function of some glucose monitors and produce falsely low glucose results. The precise mechanism of this interference is debated and may be method dependent. Some think the blood may interfere with color-based reactions by reducing light transmittance. An analogy is trying to count the raised fingers on someone’s hand when seen through glass versus tissue paper. In the first case it is easy; in the second, the hand and some fingers may be recognized, but the precise number is undefined. For other methods, the blood may be too thick for the chemical reactions to go to completion, so less glucose is detected.
FDA has published guidelines to aid the makers of these devices and several have been cleared for use in the neonatal period.
Clinicians should carefully review the claims made to help determine whether the device in question is appropriate for neonatal use. It is also a good idea to perform a literature search to see whether there are studies published in refereed journals investigating the candidate glucose meter in the population in question (e.g., healthy full-term newborns or critically ill preterm neonates). Even so, no matter what device is used, it is crucial that the end-user clinicians validate the meters they are considering for use. The clinicians should work closely with the central lab and compare candidate devices against their central laboratory’s current reference method. Given the acuity of care, it may also be a good idea to periodically reevaluate these meters against the reference standard and to develop quality assurance and quality control programs.
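As a minimal sketch of what such a validation might compute, the example below compares paired readings from a candidate meter and the central laboratory's reference method, reporting the mean bias and the fraction of readings within a chosen tolerance. The sample values and the 15 mg/dL tolerance are hypothetical placeholders, not regulatory acceptance criteria.

```python
def validate_meter(meter_mg_dl, reference_mg_dl, tolerance_mg_dl=15):
    """Compare paired meter/reference glucose readings (mg/dL).

    Returns (mean bias of meter minus reference, fraction of readings
    whose absolute difference is within the tolerance).
    """
    if not meter_mg_dl or len(meter_mg_dl) != len(reference_mg_dl):
        raise ValueError("need equal-length, non-empty paired readings")
    diffs = [m - r for m, r in zip(meter_mg_dl, reference_mg_dl)]
    mean_bias = sum(diffs) / len(diffs)
    within = sum(abs(d) <= tolerance_mg_dl for d in diffs) / len(diffs)
    return mean_bias, within

# Hypothetical paired readings from a candidate meter and the central lab.
meter = [48, 52, 61, 90, 130, 75]
reference = [50, 55, 58, 95, 120, 74]
bias, fraction_within = validate_meter(meter, reference)
```

A persistent negative bias at low glucose values, for instance, would be a red flag for the hematocrit interference discussed above and a reason to reevaluate the meter against the reference method.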
In closing, increased hematocrit may interfere with some blood glucose meters. If a result does not make sense the clinician needs to do two things. First, reassess the patient. Second, if it is clinically appropriate, get a new sample and verify the result using an accepted reference method.
Anonymous. Review Criteria Assessment of Portable Blood Glucose Monitoring In Vitro Diagnostic Devices Using Glucose Oxidase, Dehydrogenase or Hexokinase Methodology (Draft Document). FDA Guidance Document February 14, 1996.
Girouard J, et al. Multicenter Evaluation of the Glucometer Elite XL Meter, an Instrument Specifically Designed for Use With Neonates. Diabetes Care. 23(8); 1149-1153. August 2000.
Henry JB. Clinical Diagnosis and Management by Laboratory Methods, 19th ed. Philadelphia, WB Saunders, 1996. pp 197-199.
Perkins SL, et al. Laboratory and Clinical Evaluation of Two Glucose Meters for the Neonatal Intensive Care Unit. Clinical Biochemistry. 31(2); 67-71. 1998.
Soldin SJ, Devairakkam PD, Adarwalla PK. Evaluation of the Abbot PCx® Point of Care Glucose Analyzer in a Pediatric Hospital. Clinical Biochemistry. 33(4); 319-321. 2000.
St-Louis P, Ethier J. An Evaluation of three glucose meter systems and their performance in relation to criteria of acceptability for neonatal specimens. Clinica Chimica Acta. 322; 139-148. 2002.
US defense R&D body DARPA is developing implantable biosensors which would encourage the body to maintain itself.
DARPA’s ElectRx program aims to use neuromodulation of organ functions to help the human body heal itself.
Neuromodulation refers to the peripheral nervous system's constant monitoring of the status of internal organs and its regulation of biological responses to infection, injury or other imbalances.
This regulatory process can sometimes be affected by injury or illness, causing the peripheral nerve signals to exacerbate a condition, causing pain, inflammation or immune dysfunction.
The ElectRx program aims to control the regulatory process, and could fundamentally change the manner in which doctors diagnose, monitor and treat injury and illness.
This system would use tiny, intelligent implants to continually assess conditions, modulate nerve circuits and provide stimulus patterns tailored to help maintain healthy organ function, helping patients get healthy and stay healthy using their body’s own systems.
DARPA also hopes that by developing ElectRx technologies, it can boost scientific research aimed at achieving a more complete understanding of the structure and function of specific neural circuits and their role in health and disease.
The ElectRx program will require new technologies for in vivo sensing and neural stimulation, including advanced biosensors and novel optical, acoustic and electromagnetic devices to achieve precise targeting of individual or small bundles of nerve fibres that control relevant organ functions.
DARPA hopes to create minimally-invasive, ultra-miniaturised devices that can precisely target the relevant areas of the body. Ideally, these devices would be injectable through a needle.
In addition to the Space Ship Craft, I thought it would be fun to teach my Little Chefs the different Phases of the Moon. Each night, especially in the summer time, they love to find the Moon to see what “shape” it is.
I have said before that my Little Chefs and I LOVE Oreos! And while eating them one day, Little Chef E realized that the creme looked like a Moon. So I thought - what a fun way to teach them the different Phases of the Moon..... with Oreos!
We created a printable Phases of the Moon sheet for you to use. All we did was separate our Oreos, trying to get all the creme onto one side while being careful not to break them. Then, with a spoon, we shaped the creme into the different Phases of the Moon. The Little Chefs had a great time making their Moon Phases..... and of course then eating them! I then realized that there were 8 moon phases, which meant I just let my Little Chefs eat 8 Oreo Cookies!!!
Just another simple and fun activity to teach them a little bit about the Moon to go along with our Celebrating National Space Day! Hope you enjoy your Oreos and learning about the Moon!!
- Printable Phases of the Moon Sheet
- 8 Oreo Cookies
- Spoon to shape the creme
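For anyone making their own sheet, the eight phases in their usual order are listed below as a tiny Python sketch – the list itself is the useful part, and the cookie count explains where our 8 Oreos went!

```python
# The eight principal phases of the Moon, in the order they appear
# on a typical printable sheet (standard astronomy names).
MOON_PHASES = [
    "New Moon",
    "Waxing Crescent",
    "First Quarter",
    "Waxing Gibbous",
    "Full Moon",
    "Waning Gibbous",
    "Third Quarter",
    "Waning Crescent",
]

def cookies_needed(phases):
    """One Oreo gets shaped (and eaten!) per phase."""
    return len(phases)

print(cookies_needed(MOON_PHASES))  # 8
```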
"dump": "CC-MAIN-2016-40",
"url": "http://www.fivelittlechefs.com/craft/easy-crafts/phases-of-the-moon.html",
"date": "2016-09-26T22:21:57",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9507635235786438,
"token_count": 287,
"score": 2.796875,
"int_score": 3
} |
Unusually extensive ice cover on the Great Lakes has caused some delays in getting U.S./Canadian shipping lanes open for the spring, but the same cover has aided tourism in other areas of the five lakes.
The United States Coast Guard will be working with the Canadian Coast Guard with ice-breaking operations on Lake Superior, the Canadian Coast Guard said in a news release.
There has been an increased demand for ice-breaking assistance during the winter of 2013-14, the Canadian Coast Guard said. The conditions this winter haven't been seen in eastern Canada since 1994; that year, the St. Lawrence Seaway wasn't open for traffic until April 5.
Due to unusually heavy ice conditions, the opening date for the 2014 navigation season for the Montreal/Lake Ontario Section has been pushed back to March 31, 2014, the St. Lawrence Seaway Management Corp. said.
Overall, ice cover on the five lakes reached up to 92 percent in early March, the greatest coverage there since 1979.
But at the same time, the increased ice cover could translate into slightly higher water levels on the lakes to not only aid shipping, but also help fishing, AccuWeather.com Canadian Weather Expert Brett Anderson said.
Low Great Lakes water levels can limit navigability of shipping channels and reduce hydropower capacity such as at Niagara Falls, which is the largest electricity producer in New York state. It can also impede tourism and recreational activities, and increase operational risks for industries that rely on the lakes as a source of processing and cooling water, the National Oceanic and Atmospheric Administration said.
"The greater ice coverage may slightly increase water levels by reducing the amount of evaporation. Also, the higher-than-normal snowpack across the region would likely increase runoff that eventually spills into the lakes," Anderson said.
While the Great Lakes ice may increase water levels, it also boosted tourism on the Great Lakes by making previously inaccessible areas open to the public.
The Apostle Islands ice caves at Bayfield, Wis., attracted huge crowds this past season due to the increased ice cover.
More than 138,000 people visited during the ice season, according to the Apostle Islands National Lakeshore. It was 93 percent of last year's annual visitation for the whole park and 81 percent of the average annual visitation since 2000.
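As a rough consistency check on those figures (a back-of-the-envelope sketch; the 138,000 count is a lower bound, so the implied totals are estimates, not official National Park Service numbers):

```python
# Back-of-the-envelope check of the visitation figures quoted above.
# 138,000 is a lower bound ("more than"), so the implied annual totals
# are rough estimates, not official National Park Service numbers.
ice_season_visitors = 138_000

implied_last_year_total = ice_season_visitors / 0.93  # "93 percent of last year"
implied_avg_since_2000 = ice_season_visitors / 0.81   # "81 percent of the average"

print(round(implied_last_year_total))  # 148387 -> roughly 148,000 visitors
print(round(implied_avg_since_2000))   # 170370 -> roughly 170,000 visitors
```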
The increased visitation brought with it an estimated $10 million to $12 million boost to the Bayfield area economy, Bayfield Chamber of Commerce and Visitor Bureau Executive Director David Eades said.
"This has been an amazing boon to the regional economy and although we would love this to continue for another month, the safety of the visitor always comes first," Eades stated. "Although the sea caves will not be accessible again until summer, the area still has plenty of snow for all the other winter activities in the area."
Overall, the Apostle Islands National Lakeshore brings a $24 million tourism benefit to communities surrounding the park, according to a newly released National Park Service report. The visitor spending supported 330 jobs in the local area.
It's unclear how long ice cover will linger on the lakes, but it could affect the weather further into the spring.
"Later ice cover would tend to have a cooling influence in spring near the surface with stronger lake breezes through the spring," Anderson said. "It would also have a stabilizing effect on the atmosphere and possibly reduce the threat for strong thunderstorms."
"dump": "CC-MAIN-2016-40",
"url": "http://www.foxnews.com/weather/2014/03/26/extensive-great-lakes-ice-hurts-shipping-helps-tourism/",
"date": "2016-09-26T23:59:35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9592040777206421,
"token_count": 707,
"score": 2.59375,
"int_score": 3
} |
Natural Gas in Philadelphia: Sources, Uses, and Prices
RCED-84-135: Published: May 23, 1984. Publicly Released: Jun 22, 1984.
Pursuant to a congressional request to provide information on the natural gas situation in Philadelphia from 1975 to 1983, GAO focused on the operations of a municipally owned and regulated distribution company, the Philadelphia Gas Works (PGW) and its two major suppliers.
GAO found that PGW total sales and number of customers fluctuated between 1975 and 1983 because of economic conditions, population trends, and the company's policies on accepting new customers. The company's sales rise and fall considerably during the course of a year due to a high proportion of residential customers who depend on natural gas for heating. During the 8-year period reviewed, the average gas price increased by more than 300 percent. Prices to residential customers whose supply is ensured rose by a higher percentage than prices to businesses. Customer prices rose to offset operating expenses such as increased cost of gas from suppliers, processing, distribution, marketing, administrative expenses, taxes, and depreciation. In September 1983, PGW reduced its rates by 9 percent. This reduction was primarily due to refunds from its pipeline company suppliers. In addition, the company proposed rate reductions and changes intended to increase sales to businesses which can use other fuels. As natural gas prices increased, more customers found it difficult to pay their bills. Various federally funded or mandated and company-sponsored programs were made available to assist consumers. These programs included local restrictions as to when service could be disconnected, provisions for restoration of service, financial assistance to help customers pay current bills, and energy conservation assistance.
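To put the price figure in perspective, a "more than 300 percent" rise over eight years implies roughly a 19 percent compound annual increase. A hedged back-of-the-envelope sketch (the exact multiplier is an assumption; the report says only "more than 300 percent"):

```python
# "More than 300 percent" means prices at least quadrupled over the
# 8 years reviewed (1975-1983). The implied compound annual growth
# rate is an illustration, not a figure from the GAO report.
increase = 3.00            # +300% over the whole period (assumed exact)
years = 8
multiplier = 1 + increase  # final price / initial price = 4.0

cagr = multiplier ** (1 / years) - 1
print(f"{cagr:.1%}")  # 18.9% per year
```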
"dump": "CC-MAIN-2016-40",
"url": "http://www.gao.gov/products/RCED-84-135",
"date": "2016-09-26T22:50:16",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9573763012886047,
"token_count": 343,
"score": 2.640625,
"int_score": 3
} |
Create an inviting space to your garden and outdoor landscape by building a branch trellis. Install the trellis inside your garden or along a wall for a dramatic focal point. Use limbs and twigs found around your yard or visit your local home improvement store for treated wood you could use to create a beautiful trellis. Plant flowers and vines up your trellis for a natural wall of blooms that will give your space a vibrant and colorful design. Use the trellis to break up two spaces in your yard as a makeshift wall or lawn partition.
Pick out a surface that is level so your trellis is straight. Remove all weeds and small rocks to ensure a smooth ground surface. Rake the ground to remove any remaining debris or weeds.
Look around your yard and outdoor space for old branches and twigs you could utilize for your rustic trellis. Young branch saplings are very flexible and can be manipulated into tight-fitting areas of your trellis. After pruning trees or shrubs, consider using those to help create the trellis structure.
Measure the spot in which the trellis will live to help to ascertain the correct dimension of the trellis. Create an outline to refer to as you start building the trellis. Take into account any heavy vegetables or flowers you might cultivate to help choose the appropriate width and height for supporting heavy fruits.
Attach the branches together with wire and nails. Crisscross the branches over one another and form a lattice-shaped pattern. At the cross section of each twig or branch, hammer in one or two small nails. Wrap wire around each cross section once for added support. Secure the ends by twisting the wire.
Sink the bottom quarter of the rustic trellis into the ground. Dig holes, place the branch feet into them and fill with soil. Press the soil firmly around the base of the trellis for a secure fit. For added support, secure the branch trellis to a garden fence.
Plant creeping vines, such as honeysuckle or grapevines, at the base of the trellis. Water the plants twice a week to allow the roots to become fully established.
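The "one quarter in the ground" guideline above is easy to compute for any trellis height. A small sketch (heights in whatever units you measured; the helper name is just illustrative):

```python
# The "one quarter in the ground" rule as arithmetic. The helper name
# is illustrative; heights are in whatever units you measured.
def trellis_split(total_height):
    buried = total_height / 4        # quarter of the height goes underground
    visible = total_height - buried  # three quarters stand above ground
    return buried, visible

print(trellis_split(8))  # (2.0, 6.0) -> an 8-ft trellis sits 2 ft deep
```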
"dump": "CC-MAIN-2016-40",
"url": "http://www.gardenguides.com/79231-build-rustic-trellis-out-branches.html",
"date": "2016-09-26T22:45:40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9063566327095032,
"token_count": 453,
"score": 2.59375,
"int_score": 3
} |
Yucca plants are woody, perennial shrubs native to North America, South America, Central America and some parts of the Caribbean. Yucca is valued for its attractive evergreen foliage, which is topped by showy white flowers that appear on waxy flower spikes during the summer months. Yucca can reach an overall height of up to 10 feet and makes a dramatic statement in the home garden. Yucca plants, hardy in zones 4 through 10, are easy to grow once established and require very little maintenance to thrive.
Plant container-grown yucca plants in spring or fall. Choose a planting location that receives full sun and has well-drained soil. Yucca plants will tolerate a wide range of soil conditions including poor, dry soil, but they perform best when planted in rich soil with good drainage.
Use a garden tiller to loosen the soil at the planting site to a depth of at least 12 inches. Spread a 3- to 4-inch layer of organic compost over the loosened soil and incorporate into the soil with a garden tiller prior to planting.
Dig a planting hole of equal depth and width to the container in which the yucca plant was previously grown. Place the plant into the hole carefully, and then backfill with soil. Water immediately after planting to settle the soil and initiate new growth.
Water yucca plants sparingly, about once every two weeks, during the hot summer months and during periods when two weeks pass without any natural rainfall. Otherwise yucca needs no supplemental watering.
Prune yucca plants after flowering by removing the dead flower stalks with pruning shears. Remove dead leaves, if desired, to improve the aesthetic appeal of your plant, or wait for them to fall off on their own, in late fall or early winter.
Propagate yucca plants by removing the "pups," or small plants that appear around the outside of the mother plant. Use a sharp garden spade to gently dig up the pups and plant in small pots. Grow indoors in high-quality potting soil, water once per week and keep in full sun until roots form. Plant in the garden in spring or fall.
"dump": "CC-MAIN-2016-40",
"url": "http://www.gardenguides.com/90783-growing-yucca-plants.html",
"date": "2016-09-26T22:40:25",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.935610294342041,
"token_count": 448,
"score": 2.96875,
"int_score": 3
} |
Quantum Crystallography (QCr) refers to the combination of structural crystallographic information with quantum-mechanical theory. The objective is to facilitate computational chemistry calculations and thereby enhance the information that may be derived from a crystallographic experiment. This concept has a long history and in recent years has been finding increased attention because of the advances in both theory and computers. Dr. Massa's method for obtaining quantum mechanical molecular energy involves the use of parts of a whole molecule, which in his formalism are called kernels. The individual calculations based on kernels are relatively small, compared to that which would be required to treat an entire molecule all at once. Subsequently, the group sums kernel contributions to obtain the energy for a whole molecule. In so doing they simplify the formidable task of obtaining a true quantum energy calculation for very large molecules. The saving of computational time is significant. The theoretical background for our approach to quantum crystallography may be found in References and additional references therein.
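The kernel bookkeeping can be sketched in a few lines. The formula below is one published form of the kernel-summation idea (adjacent double kernels minus the doubly counted interior single kernels); treat it, the function name, and the toy numbers as illustrative assumptions rather than Massa's exact formalism:

```python
# Toy sketch of the kernel-summation idea. One published form of the
# Kernel Energy Method estimates a molecule's energy from "double
# kernels" (adjacent fragment pairs) minus the doubly counted interior
# single kernels:
#
#   E_total ~ sum(E[i, i+1], i = 1..n-1) - sum(E[i], i = 2..n-1)
#
# The energies below are made-up numbers chosen only to show the bookkeeping.
def kem_energy(single, double):
    """single[i]: energy of kernel i; double[i]: energy of pair (i, i+1)."""
    n = len(single)
    assert len(double) == n - 1, "need one double kernel per adjacent pair"
    return sum(double) - sum(single[1:n - 1])

single_kernels = [-10.0, -12.0, -11.0]   # E_1, E_2, E_3 (toy values)
double_kernels = [-23.5, -24.25]         # E_12, E_23 (toy values)

print(kem_energy(single_kernels, double_kernels))  # -35.75
```

The point of the decomposition is that each kernel calculation scales with the fragment size rather than the whole molecule, which is where the computational saving comes from.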
"dump": "CC-MAIN-2016-40",
"url": "http://www.gc.cuny.edu/Page-Elements/Academics-Research-Centers-Initiatives/Doctoral-Programs/Chemistry/Faculty-Bios/Louis-Massa",
"date": "2016-09-26T22:37:34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9298415184020996,
"token_count": 196,
"score": 2.546875,
"int_score": 3
} |
An international or territorial dispute is a disagreement over the rights of two or more states with regard to control of a given piece of land. International disputes find their roots in a number of issues including natural resources, ethnic or religious demography, and even ambiguous treaties. When left unchecked, international disputes have caused criminal actions, terrorism, wars, and even genocide—all in the name of reasserting rights over territory. The UN Charter in no way allows states to use force to annex territory from any other state: “All Members shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations.”
Arbitration can be made an appropriate international dispute settlement mechanism for international disputes when arbitration agreements are carefully drafted. Arbitration is especially valuable in contract disputes between a private company located in a Western nation and a government agency or government-controlled company in a developing state as well as in the framework of East-West trade agreements. Parties to international contracts often favor arbitration because compared to litigation they believe it is inexpensive, rapid, informal, generative of consensus, and a means of minimizing or avoiding the need for lawyers. These advantages are partially attainable through the careful structuring of the arbitration agreement, but without the proper agreement they can prove illusory.
If the advantages of arbitration are to be achieved, the drafter of an arbitration clause must be particularly aware of the role of law in arbitration. At its inception, arbitration depends on statutory approval; the arbitral award often must be converted into a judgment for enforcement purposes. Also, throughout the arbitration process, the law intervenes (a factor which a good draftsman of an arbitration agreement should bear in mind). The drafter of an arbitration agreement must also take into account the rules for contesting the validity of an arbitral award in the jurisdiction in which the dispute is heard. Generally, parties to an international contract should not opt for arbitration in the event of a dispute without careful consideration of the reasons for its use and the thoughtful, precise drafting of the arbitration agreement.
China places restrictions on the rights of foreign warships to exercise innocent passage of territorial waters, claims extensive sovereignty in its Exclusive Economic Zone (EEZ), and has made maritime claims citing historic waters. China asserts that these actions are consistent with the provisions of the United Nations Convention on the Law of the Sea (UNCLOS). The United States does not recognize China's claims, and these restrictions encroach upon U.S. national rights and interfere with the ability of the theater Combatant Commander (PACOM) to employ forces in the Western Pacific littoral. PACOM must continue to conduct freedom of navigation (FON) operations to assert U.S. claims while engaging regional partners such as Japan. The U.S. must assist in developing workable solutions to South China Sea maritime disputes that are consistent with U.S. interests.
Some of the claims made by coastal nations are inconsistent with international law. The United States does not recognize those maritime claims that are not in conformity with customary international law, as reflected in the 1982 United Nations Law of the Sea Convention. Examples include excessive straight baseline claims, territorial sea claims in excess of 12 nautical miles (nm), and other claims that unlawfully impede freedom of navigation and overflight. The United States has protested excessive claims and conducted operational assertions against such excessive claims under the Freedom of Navigation Program.
The U.S. Freedom of Navigation (FON) Program began in 1979 and is designed to be a peaceful exercise of the rights and freedoms of navigation and overflight recognized under international law. United States policy is to accept and act in accordance with the balance of interests relating to traditional uses of the oceans, such as navigation and overflight. In this respect, the United States recognizes the rights of other states in the waters off their coasts, as reflected in the Convention, so long as the rights and freedoms of the United States and others under international law are recognized by such coastal states. In addition, United States policy is to exercise and assert its navigation and overflight rights and freedoms on a worldwide basis in a manner that is consistent with the balance of interests reflected in the Convention. The United States will not, however, acquiesce in unilateral acts of other states designed to restrict the rights and freedoms of the international community in navigation and overflight and other related high seas uses.
Although some US operations receive public scrutiny (such as those that have occurred in the Black Sea, in the Gulf of Sidra, and in the South China Sea), most do not. Since 1979, U.S. military ships and aircraft have exercised their rights and freedoms in all oceans against objectionable claims of more than 35 countries at the rate of some 30-40 per year.
"dump": "CC-MAIN-2016-40",
"url": "http://www.globalsecurity.org/military/world/war/disputes.htm",
"date": "2016-09-26T22:28:20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9390950202941895,
"token_count": 987,
"score": 3.15625,
"int_score": 3
} |
Workers at Kamwo Herb and Tea on Grand Street in Chinatown preparing packets of herbs.
Last week, Britain’s Medicines and Healthcare Products Regulatory Agency (MHRA) issued a press release warning that extreme caution should be used with a number of traditional Chinese medicines (TCMs), because they could contain dangerously high levels of toxins including lead, mercury and arsenic.
The communique noted that these medicines are not authorised for sale in Britain, but can be bought through the Internet.
“People are warned to exercise extreme caution when buying unlicensed medicines as they have not been assessed for safety and quality and standards can vary widely,” it says.
The TCMs named include “Niuhuang Jiedu Pian” (also called by the Indian names “Divya Kaishore Guggul” and “Chandraprabha Vati”) for the treatment of stomatitis, tonsillitis and toothache; “Bak Foong” pills, often used for the treatment of menstrual pain; and “Fa-bao”, a hair tonic for treating baldness.
Earlier this year, Hong Kong authorities ordered the recall of Bak Foong pills and Fa-bao, which were respectively found to exceed by two and 11 times the levels of lead and mercury permitted by the health authority. In July, the Swedish National Food Agency also found extremely high levels of arsenic in Niuhuang Jiedu Pian and warned other European Union countries that it constituted a serious health risk.
Richard Woodfield, MHRA’s Head of Herbal Policy, said in the British agency’s press release: “The adulteration of traditional Chinese medicines with heavy metals is a significant international problem and can pose a serious risk to public health”, urging the public to choose herbal medicines that meet quality and safety standards, and have received the Traditional Herbal Registration certificate on their packaging.
“Natural does not mean safe,” Woodfield concluded.
Since the beginning of this year, TCMs have repeatedly been questioned, mainly by official and private drug-testing organisations. Apart from being potentially toxic, the most common concerns relating to Chinese herbal remedies are that they often contain pesticide residue.
For instance, two recent Greenpeace reports stated that out of the 36 kinds of frequently used Chinese herbal products tested in seven countries (Germany, France, Britain, the Netherlands, Italy, the United States and Canada), pesticide residues were detected in all but one of the products. Among those cited are nine of China’s leading medicinal drug brands such as Tong-Ren-Tang and Yun-Nan-Bai-Yao.
The Chinese medicine industry’s reaction towards safety issues has remained unchanged for years: avoidance. For instance, no businesses responded to the Greenpeace reports. Similarly, a public relations official at one of the companies cited said that his peers all have a standard answer: “We are in line with national standards, EU standards do not apply to China domestically...”
The source added that the issues of pesticide residue not only exists in TCM but also in vegetables and fruits.
“We dare not say anything, otherwise we can easily become the target,” the public relations official said.
In fact, whether it’s Western or Chinese, more or less all medicines produce side effects. Over thousands of years of TCM development, the concept that “all medicines are in a certain sense a sort of poison” has always been widely acknowledged.
The difference is that Western medical practices require the side effects of medicinal substances to be clearly identified, explains Wei Lixin, a member of the Professional Commission of Ethnic Medicines and Traditional Chinese Medicines Standard Substances under the Chinese Pharmacopoeia Commission.
Instead, there are no such elaborate guidelines on toxicity for TCM, notes Wei. As a result, Chinese and foreign commercial health care companies have often touted their herbal preparations to be free of any harm — usually a totally false claim.
Yu Zhibin, of the China Chamber of Commerce for Import and Export of Medicines and Health Products, stressed that among the numerous incidents involving acute TCM side effects in the West one important cause is that these countries do not regard the Chinese medicines as drugs, but as health foods. As a result some of the public ingest the products as though they were natural and non-harmful foods to eat.
He believes that this could be misleading and result in the abuse and misuse of TCMs.
Since so many Chinese medicines contain toxins, how can TCM producers minimise their adverse effects and avoid harming patients?
Traditional medicinal theory has long held that the blending of several herbal ingredients mitigates the toxicity of each of the substances. Blending ingredients forces the toxic substances to react with each other, which dilutes them or causes them to decompose.
China’s Food and Drug Administration stated in February of this year, “The fact that some Chinese medicines contain toxic substances cannot be automatically interpreted as displaying toxic side effects in their clinical application.”
Wei also stressed that when talking about the toxicity of Chinese medicines, the elements of the chemical compound, the dosage and the period of treatment should all be taken into account.
Simply declaring that a certain substance is poisonous can be misleading. He used the example of cinnabar, a commonly used but controversial mineral drug that can be toxic in high doses.
“It’s an incomplete statement to say cinnabar is poisonous. The truth is that it will become harmful to humans in high and extended usage.”
For decades, the Chinese medical community has been trying to use modern Western methodologies to measure the toxicity of herbal remedies. Chen Keji, president of the Chinese Association of Integrative Medicine and a member of the Chinese Academy of Sciences, said that in the current pharmacopoeia, 18.3% of Chinese medicines do not have instructions for dosage or any toxicity analysis.
Many Chinese medicine producers believe, given the present situation, that it will be a long time before authorities can provide a systematic analysis for all substances used in Chinese medicine.
But the effort continues, and at the national level three Chinese medicine safety evaluation centers and four TCM standardization clinical trial centers have been established in recent years. Newly created TCMs are required to satisfy the standards of a safety evaluation system before receiving regulatory approval and having safety guidelines developed.
Wei says he hopes the central government will launch a comprehensive toxicity study, but says it will require an enormous amount of time and energy to conduct scientific safety evaluations for all TCMs. “A toxicity problem is often not addressed until a poisoning incident is reported,” he concludes.
However, the inability of Chinese products to meet the norms of modern medicine and assure their safety has left China behind on the international TCM market, a $30 billion annual business.
Currently, Japanese and South Korean companies claim a 70% share, while China accounts for a mere 5%, which is mostly represented by raw herbal materials exported to Japan and South Korea.
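For rough perspective, those shares translate into dollar figures as follows (a back-of-the-envelope sketch; the $30 billion market size and the percentages are the article's approximate numbers):

```python
# Dollar figures implied by the shares quoted above (rounded; the
# $30 billion market size and the percentages are the article's numbers).
market_usd_billion = 30

japan_korea = round(market_usd_billion * 0.70, 1)  # ~$21 billion
china = round(market_usd_billion * 0.05, 1)        # ~$1.5 billion

print(japan_korea, china)  # 21.0 1.5
```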
According to Yuan Jinghua, an investment manager in the industry, China launches over 1,000 TCM products every year whereas Japan concentrates its efforts on clinical studies and meeting international standards.
Up until now Chinese TCMs rarely passed the certification of the US Food and Drug Administration.
Without certification it is impossible for them to be used as medicine and thus they are unrecognised by mainstream American medical institutions.
Britain was the first European country to set up proper legal regulation of TCMs, though the majority of TCMs are sold as “health products” or “natural plant foods” because it’s hard for them to pass the norms as drugs.
Shen Zhixiang, the president of the Chinese Folk Medicine Research Association, said that “In reference to safety and effectiveness, Chinese TCM companies have given neither enough effort nor funding”. He reckons that apart from national research projects led by the government, the industry itself has to enhance the required studies.
Only when the fundamental work of TCM pharmacology and toxicity studies are straightened out will the larger business prosper. — Worldcrunch/Caixin Media
"dump": "CC-MAIN-2016-40",
"url": "http://www.gulf-times.com/story/364053/The-toxic-risks-of-traditional-medicine",
"date": "2016-09-26T22:29:10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9397813677787781,
"token_count": 1810,
"score": 2.6875,
"int_score": 3
} |
People cut to relieve pain, but it's also because they can't control what is happening in their life (someone may have died and they couldn't stop it, or their parents broke up) - things like that. When the level of emotional pressure becomes too high, it acts as a safety valve - a way of relieving the tension. Other reasons could be:
Cutting makes the blood take away the bad feelings
Pain can make you feel more alive when feeling numb or dead inside
Punishing oneself in response to feelings of shame or guilt
When it's too difficult to talk to anyone, it's a form of communication about unhappiness and a way of acknowledging the need for help
Really it depends on the person and you will only find out if she wants to tell you.
A bit about depression, in case you're not sure.
Depression is often an illness. If you're depressed, the usual feelings of sadness that we all experience temporarily remain for weeks, months and years. They can be so intense that daily life is affected. You can't work normally, you don't want to be with your family and friends, and you stop enjoying the things you usually do.
If you're depressed, you may feel worthless, hopeless and constantly tired. In most cases, if you have milder depression, you can probably carry on but will find everyday tasks difficult. If you have severe depression, you may find your feelings so unbearable that you start thinking about suicide.
About one in 10 of us develops some form of depression in our lives, and one in 50 has severe depression. It affects not only those with depression, but also their families and friends.
The good news is that with the right treatment and support, most depressed people make a full recovery. It's important to seek help from your GP if you think you may be depressed.
Psychological symptoms include:
Feelings of hopelessness and helplessness.
Feelings of guilt.
Feeling irritable and intolerant of others.
Lack of motivation and less interest, and difficulty in making decisions.
Lack of enjoyment.
Suicidal thoughts or thoughts of harming someone else.
Feeling anxious or worried.
Reduced sex drive.
Physical symptoms include:
Slowed movement or speech.
Change in appetite or weight (usually decreased, but sometimes increased).
Unexplained aches and pains.
Lack of energy or lack of interest in sex.
Changes to the menstrual cycle.
Disturbed sleep patterns (for example, problems getting off to sleep or waking in the early hours of the morning).
I think what you are doing is brilliant. You really are a true friend to her, and she is lucky to have you. So don't give up.
But in the end she does need to see a doctor, because medication will help her. I know how she feels about not wanting to go to the doctors. If she still feels like she doesn't want to go, why don't you see if she would go with you and do it together?
I hope this was of some help.
"dump": "CC-MAIN-2016-40",
"url": "http://www.healthboards.com/boards/self-injury-recovery/731196-help-friend-who-cuts.html",
"date": "2016-09-26T22:56:39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9590018391609192,
"token_count": 637,
"score": 2.53125,
"int_score": 3
} |
"This is an entirely new approach to study a huge public health issue," Delp said. "It's a completely new tool that is now available to neuroscientists everywhere."
The mice are modified with gene therapy to have pain-sensing nerves that can be controlled by light. One color of light makes the mice more sensitive to pain. Another reduces pain. The light was shone on the paws of mice through the Plexiglas bottom of the cage.
The findings of the research were published online Feb. 16 in Nature Biotechnology. Delp was the senior author. The lead authors were graduate students Shrivats Iyer and Kate Montgomery. The researchers said the study opens the door to future experiments on the nature of pain, touch and other sensations that now are poorly understood.
"The fact that we can give a mouse an injection and two weeks later shine a light on its paw to change the way it senses pain is very powerful," Iyer said.
For example, increasing or decreasing the sensation of pain in these mice could help scientists understand why pain seems to continue in people after an injury has healed. Does persistent pain change those nerves in some way? And if so, how can they be changed back to a state in which, absent an injury, they stop sending pain messages to the brain?
Leaders at the National Institutes of Health agree that the work could have important implications for treating pain. "This powerful approach shows great potential for helping the millions who suffer pain from nerve damage," said Linda Porter, the pain policy adviser at the National Institute of Neurological Disorders and Stroke and a leader of the NIH's Pain Consortium.
The researchers took advantage of a technique called optogenetics, which involves light-sensitive proteins called opsins that are inserted into the nerves. Optogenetics was developed by Delp's colleague Karl Deisseroth, MD, PhD, a co-author of the paper. He has used the technique as a way of activating precise regions of the rodent brain to better understand how the brain functions. Deisseroth is a professor of bioengineering and of psychiatry and behavioral sciences, as well as a Howard Hughes Medical Institute investigator.
Delp, who has an interest in muscles and movement, saw the potential for using optogenetics not just for studying the brain but also for studying the many nerves outside the brain. These are the nerves that control movement, pain, touch and other sensations throughout our body and that are involved in diseases like amyotrophic lateral sclerosis, also known as Lou Gehrig's disease.
A few years ago, Stanford's Bio-X program, which encourages interdisciplinary projects like this one, supported Delp and Deisseroth in their efforts to use optogenetics to control the nerves in mice that excite muscles. In the process of doing that work, Delp said, a student of his at the time, Michael Llewellyn, would occasionally find that he would place the opsins into nerves that signal pain rather than the ones that control muscle.
That accident sparked a new line of research. "We thought, wow, we're getting pain neurons - that could be really important," Delp said. He suggested that Montgomery and Iyer focus on those pain nerves that had been a byproduct of the muscle work.
A key component of the work was a new approach to quickly incorporate opsins into the nerves of mice. The team started with a virus that had been engineered to contain the DNA that produces the opsin. Then they injected those modified viruses directly into mouse nerves. Weeks later, only the nerves that control pain had incorporated the opsin proteins and would fire, or be less likely to fire, in response to different colors of light.
The speed of the viral approach makes it very flexible, both for this work and for future studies, the study's authors said. Researchers are developing newer forms of opsins with different properties. (Current opsins respond to light on the bluish end of the spectrum, which doesn't penetrate very deeply into body tissues.) "Because we used a viral approach, we could, in the future, quickly turn around and use newer opsins," said Montgomery, a Stanford Bio-X fellow.
This entire project, which spans bioengineering, neuroscience and psychiatry, could never have happened without the environment at Stanford that supports collaboration across departments, Delp said. The pain portion of the research came out of support from NeuroVentures, which was a project incubated within Bio-X to support the intersection of neuroscience and engineering or other disciplines. That project was so successful it has spun off into the Stanford Neurosciences Institute, of which Delp is now a deputy director.
Delp said there are many challenges to meet before new drugs and medical techniques that result from these experiments could become available to people, but that he always has that as a goal.
"Developing a new therapy from the ground up would be incredibly rewarding," he said. "Most people don't get to do that in their careers."
Other Stanford co-authors of the study were postdoctoral scholars Chris Towne, PhD, and Soo Yeun Lee, PhD; and research assistant Charu Ramakrishnan.
The study was supported by the NIH and Bio-X, as well as by fellowships from the Stanford Office of Technology Licensing, the Howard Hughes Medical Institute, the Office of the Vice Provost for Graduate Education and the Swiss National Science Foundation.
Congenital Heart Defects
What Are Congenital Heart Defects?
Congenital (kon-JEN-i-tal) heart defects are problems with the heart's structure that are present at birth. These defects can involve:
- The interior walls of the heart
- The valves inside the heart
- The arteries and veins that carry blood to the heart or out to the body
Congenital heart defects change the normal flow of blood through the heart.
There are many types of congenital heart defects. They range from simple defects with no symptoms to complex defects with severe, life-threatening symptoms.
Congenital heart defects are the most common type of birth defect. They affect 8 of every 1,000 newborns. Each year, more than 35,000 babies in the United States are born with congenital heart defects.
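The two prevalence figures quoted above are mutually consistent, which a little arithmetic confirms. The sketch below assumes roughly 4.4 million U.S. births per year, a figure not given in the article:

```python
# Back-of-the-envelope check of the article's prevalence figures.
# ASSUMPTION: about 4.4 million U.S. births per year (not stated in the article).
rate_per_1000 = 8             # congenital heart defects per 1,000 newborns
annual_us_births = 4_400_000  # assumed annual birth count, for illustration only
affected = annual_us_births * rate_per_1000 / 1000
print(f"Estimated affected newborns per year: {affected:,.0f}")  # → 35,200
```

At that assumed birth rate, 8 per 1,000 works out to just over 35,000 newborns a year, matching the "more than 35,000 babies" figure in the text.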
Many of these defects are simple conditions. They need no treatment or are easily fixed. Some babies are born with complex congenital heart defects. These defects require special medical care soon after birth.
The diagnosis and treatment of complex heart defects has greatly improved over the past few decades. As a result, almost all children who have complex heart defects survive to adulthood and can live active, productive lives.
Most people who have complex heart defects continue to need special heart care throughout their lives. They may need to pay special attention to how their condition affects issues such as health insurance, employment, birth control and pregnancy, and other health issues.
In the United States, more than 1 million adults are living with congenital heart defects.
How the Heart Works
To understand congenital heart defects, it's helpful to know how a normal heart works. Your child's heart is a muscle about the size of his or her fist. It works like a pump and beats 100,000 times a day.
The heart has two sides, separated by an inner wall called the septum. The right side of the heart pumps blood to the lungs to pick up oxygen. The left side of the heart receives the oxygen-rich blood from the lungs and pumps it to the body.
The heart has four chambers and four valves and is connected to various blood vessels. Veins are the blood vessels that carry blood from the body to the heart. Arteries are the blood vessels that carry blood away from the heart to the body.
The heart has four chambers or "rooms."
- The atria (AY-tree-uh) are the two upper chambers that collect blood as it comes into the heart.
- The ventricles (VEN-trih-kuls) are the two lower chambers that pump blood out of the heart to the lungs or other parts of the body.
Four valves control the flow of blood from the atria to the ventricles and from the ventricles into the two large arteries connected to the heart.
- The tricuspid (tri-CUSS-pid) valve is in the right side of the heart, between the right atrium and the right ventricle.
- The pulmonary (PULL-mun-ary) valve is in the right side of the heart, between the right ventricle and the entrance to the pulmonary artery, which carries blood to the lungs.
- The mitral (MI-trul) valve is in the left side of the heart, between the left atrium and the left ventricle.
- The aortic (ay-OR-tik) valve is in the left side of the heart, between the left ventricle and the entrance to the aorta, the artery that carries blood to the body.
Valves are like doors that open and close. They open to allow blood to flow through to the next chamber or to one of the arteries, and then they shut to keep blood from flowing backward.
When the heart's valves open and close, they make a "lub-DUB" sound that a doctor can hear using a stethoscope.
- The first sound - the “lub” - is made by the mitral and tricuspid valves closing at the beginning of systole (SIS-toe-lee). Systole is when the ventricles contract, or squeeze, and pump blood out of the heart.
- The second sound - the “DUB” - is made by the aortic and pulmonary valves closing at the beginning of diastole (di-AS-toe-lee). Diastole is when the ventricles relax and fill with blood pumped into them by the atria.
The arteries are major blood vessels connected to your heart.
- The pulmonary artery carries blood pumped from the right side of the heart to the lungs to pick up a fresh supply of oxygen.
- The aorta is the main artery that carries oxygen-rich blood pumped from the left side of the heart out to the body.
- The coronary arteries are the other important arteries attached to the heart. They carry oxygen-rich blood from the aorta to the heart muscle, which must have its own blood supply to function.
The veins are also major blood vessels connected to your heart.
- The pulmonary veins carry oxygen-rich blood from the lungs to the left side of the heart so it can be pumped out to the body.
- The vena cava is a large vein that carries oxygen-poor blood from the body back to the heart.
Types of Congenital Heart Defects
With congenital heart defects, some part of the heart doesn’t form properly before birth. This changes the normal flow of blood through the heart.
There are many types of congenital heart defects. Some are simple, such as a hole in the septum. The hole allows blood from the left and right sides of the heart to mix. Another example of a simple defect is a narrowed valve that blocks blood flow to the lungs or other parts of the body.
Other heart defects are more complex. They include combinations of simple defects, problems with the location of blood vessels leading to and from the heart, and more serious problems with how the heart develops.
Examples of Simple Congenital Heart Defects
Holes in the Heart (Septal Defects)
The septum is the wall that separates the chambers on left and right sides of the heart. The wall prevents blood from mixing between the two sides of the heart. Some babies are born with holes in the septum. These holes allow blood to mix between the two sides of the heart.
Atrial septal defect (ASD). An ASD is a hole in the part of the septum that separates the atria—the upper chambers of the heart. The hole allows oxygen-rich blood from the left atrium to flow into the right atrium, instead of flowing into the left ventricle as it should. Many children who have ASDs have few, if any, symptoms.
ASDs can be small, medium, or large. Small ASDs allow only a little blood to leak from one atrium to the other. They don't affect how the heart works and don't need any special treatment. Many small ASDs close on their own as the heart grows during childhood.
Medium and large ASDs allow more blood to leak from one atrium to the other. They’re less likely to close on their own.
About half of all ASDs close on their own over time. Medium and large ASDs that need treatment can be repaired using a catheter procedure or open-heart surgery.
Ventricular septal defect (VSD). A VSD is a hole in the part of the septum that separates the ventricles—the lower chambers of the heart. The hole allows oxygen-rich blood to flow from the left ventricle into the right ventricle, instead of flowing into the aorta and out to the body as it should.
VSDs can be small, medium, or large. Small VSDs don't cause problems and may close on their own. Medium VSDs are less likely to close on their own and may require treatment.
Large VSDs allow a lot of blood to flow from the left ventricle to the right ventricle. As a result, the left side of the heart must work harder than normal. Extra blood flow increases blood pressure in the right side of the heart and the lungs.
The heart’s extra workload can cause heart failure and poor growth. If the hole isn't closed, high blood pressure can scar the arteries in the lungs.
Doctors use open-heart surgery to repair VSDs.
Simple congenital heart defects also can involve the heart's valves. These valves control the flow of blood from the atria to the ventricles and from the ventricles into the two large arteries connected to the heart (the aorta and the pulmonary artery).
Valves can have the following types of defects:
- Stenosis (steh-NO-sis). This defect occurs if the flaps of a valve thicken, stiffen, or fuse together. As a result, the valve cannot fully open. Thus, the heart has to work harder to pump blood through the valve.
- Atresia (ah-TRE-ze-AH). This defect occurs if a valve doesn't form correctly and lacks a hole for blood to pass through. Atresia of a valve generally results in more complex congenital heart disease.
- Regurgitation (re-GUR-jih-TA-shun). This defect occurs if a valve doesn't close tightly. As a result, blood leaks back through the valve.
The most common valve defect is pulmonary valve stenosis, which is a narrowing of the pulmonary valve. This valve allows blood to flow from the right ventricle into the pulmonary artery. The blood then travels to the lungs to pick up oxygen.
Pulmonary valve stenosis can range from mild to severe. Most children who have this defect have no signs or symptoms other than a heart murmur. Treatment isn't needed if the stenosis is mild.
In babies who have severe pulmonary valve stenosis, the right ventricle can get very overworked trying to pump blood to the pulmonary artery. These infants may have signs and symptoms such as rapid or heavy breathing, fatigue (tiredness), and poor feeding. Older children who have severe pulmonary valve stenosis may have symptoms such as fatigue while exercising.
Some babies may have pulmonary valve stenosis along with a patent ductus arteriosus (PDA) or an ASD. If this happens, oxygen-poor blood can flow from the right side of the heart to the left side. This can cause cyanosis (si-ah-NO-sis). Cyanosis is a bluish tint to the skin, lips, and fingernails. It occurs because the oxygen level in the blood leaving the heart is below normal.
Severe pulmonary valve stenosis is treated with a catheter procedure.
Example of a Complex Congenital Heart Defect
Complex congenital heart defects need to be repaired with surgery. Advances in treatment now allow doctors to successfully repair even very complex congenital heart defects.
The most common complex heart defect is tetralogy of Fallot (teh-TRAL-o-je of fah-LO), which is a combination of four defects:
- Pulmonary valve stenosis.
- A large VSD.
- An overriding aorta. In this defect, the aorta is located between the left and right ventricles, directly over the VSD. As a result, oxygen-poor blood from the right ventricle can flow directly into the aorta instead of into the pulmonary artery.
- Right ventricular hypertrophy (hi-PER-tro-fe). In this defect, the muscle of the right ventricle is thicker than usual because it has to work harder than normal.
In tetralogy of Fallot, not enough blood is able to reach the lungs to get oxygen, and oxygen-poor blood flows to the body.
Babies and children who have tetralogy of Fallot have episodes of cyanosis, which can be severe. In the past, when this condition wasn't treated in infancy, older children would get very tired during exercise and might faint. Tetralogy of Fallot is repaired in infancy now to prevent these problems.
Tetralogy of Fallot must be repaired with open-heart surgery, either soon after birth or later in infancy. The timing of the surgery will depend on how narrow the pulmonary artery is.
Children who have had this heart defect repaired need lifelong medical care from a specialist to make sure they stay as healthy as possible.
Echinacea Side Effects
Learn some of the common problems that might occur when taking Echinacea.
Many who have jumped on the Echinacea bandwagon tout the success of this traditional herbal remedy, at least until they encounter its side effects. These include a host of issues that most people don't experience but that a small percentage of individuals do. Learn who's at risk and what you should look for in terms of Echinacea's side effects.
Using Echinacea to treat various ailments is documented in Native American tribes, who used the native wildflower as a remedy for many problems. The Native Americans passed the Echinacea herbal tradition to the pioneers, and eventually Echinacea was carried back to Europe. The biggest increase in Echinacea use came after German scientists did extensive studies with it in the 1920s.
Today Echinacea is used for its possible ability to treat colds and flu. Herbalists and some scientific studies report that Echinacea might help in preventing a cold or flu and might also help alleviate cold or flu symptoms more quickly. Some herbal remedy followers also use Echinacea as a wound wash for its purported ability to eliminate infection. Others have said that Echinacea might potentially treat strep throat or might operate as a general immune booster during flu season.
Before you begin taking Echinacea or any other herbal remedy, discuss your plans with your doctor.
No matter how you use Echinacea, it’s wise to understand the types of side effects you might experience when consuming it. The most important thing to know is that if you are allergic to plants in the daisy family—including sunflowers, marigolds and ragweed—you’ll likely show side effects when taking Echinacea. If you have hay fever each year, you’ll probably experience some Echinacea side effects. In these cases, the side effects are usually itchy eyes and a scratchy throat.
Others at risk for Echinacea side effects include individuals with asthma, auto-immune disorders, HIV or tuberculosis and those on immune-suppressing drugs. One of the most common Echinacea side effects is nausea or stomach upset. This may occur with or without food on the stomach when taking Echinacea. Dizziness is another Echinacea side effect, along with rashes or swelling. If your body starts to itch or swell after taking Echinacea, observe the reaction carefully. An abrupt increase in these symptoms could require a trip to your local doctor or emergency care clinic.
If while consuming Echinacea you experience any difficulty breathing, including wheezing, coughing or chest tightness, you should stop taking Echinacea immediately and call your doctor. Long-term use of Echinacea, beyond six to eight weeks, has been said to depress the immune system.
If you experience any of these side effects when taking Echinacea, it’s wise to stop taking the herb and call your doctor.
Editor's Note: This article is not intended as medical advice. Always consult a professional healthcare provider before trying any form of therapy or if you have any questions or concerns about a medical condition. The use of natural products can be toxic if misused, and even when suitably used, certain individuals could have adverse reactions.
Holistic Health News
Holistic and natural health care information for people and their pets
Who can resist chocolate? Like it or not, your dog will have to. Chocolate is made with cocoa beans, and cocoa beans contain a chemical called theobromine, which is toxic to dogs. So on Valentine's Day, you're actually being kind to your best buddy if you eat all the chocolates yourself! Read my special report on chocolate at http://www.great-dog-gift.com/chocolate to learn more, and see how different types of chocolate have varying effects on a dog's health.
Cocoa bean shells are a by-product of chocolate production (which is how mulch made it into the "foods" category) and are popular as mulch for landscaping. Homeowners like the attractive color and scent, and the fact that the mulch breaks down into an organic fertilizer. However, some dogs like to eat it and it contains Theobromine.
Fatty foods are hard for a dog to digest and can overtax the pancreas, leading to pancreatitis. This can threaten your dog's health and is potentially fatal.
Macadamia nuts should be avoided. In fact, most nuts are not good for a dog's health, since their high phosphorus content is said to lead to bladder stones.
Mulch isn't food, but there's one type tempting enough for dogs to eat. Some dogs are attracted to cocoa mulch and will eat it in varying quantities. The cocoa bean shells can contain from 0.2% to 3% theobromine (the toxin), compared with 1% to 4% in unprocessed beans.
Onions, especially raw onions, have been shown to trigger hemolytic anemia in dogs. (Stephen J Ettinger, D.V.M and Edward C. Fieldman, D.V.M. 's book: Textbook of Veterinary Internal Medicine vol. 2 pg 1884.) Stay away from onion powder too.
Potato poisonings among people and dogs are rare but have occurred. The toxin, solanine, is poorly absorbed and is only found in green sprouts (which occur in tubers exposed to sunlight) and green potato skins. This explains why incidents seldom occur. Note that cooked, mashed potatoes are fine for a dog's health; they are actually quite nutritious and digestible.
Xylitol is used as a sweetener in many products, especially sugarless gum and candies. Ingesting large amounts of products sweetened with xylitol may cause a sudden drop in blood sugar in dogs, resulting in depression, loss of coordination, and seizures. According to Dr. Eric K. Dunayer, a consulting veterinarian in clinical toxicology for the poison control center, "These signs can develop quite rapidly, at times less than 30 minutes after ingestion of the product; therefore, it is important that pet owners seek veterinary treatment immediately."
Turkey skin is currently thought to cause acute pancreatitis in dogs, partly due to its high fat content.
Thanks to a more educated public, fewer fatalities from foods like chocolate are being reported these days. But it's important to keep up with what's currently known about foods and their effects on a dog's health. Grapes and cocoa mulch, for example, were only recently discovered to have harmful effects. Check frequently with sources like the ASPCA and your veterinarian.
Of course, being alert and getting your pet to the vet promptly will help assure a happy outcome if something unfortunate should happen. Here's to your dog's health and good nutrition!
Columbia in Howard County, Maryland — The American Northeast (Mid-Atlantic)
The Pratt Through-Truss Bridge
Patuxent Branch Trail
In 1844, Caleb and Thomas Pratt developed a bridge that was built with wood and diagonal iron rods. They patented the design, which was made up of sections called trusses. Soon, they built the bridge entirely of iron. This bridge had the advantage of low-cost construction, because the iron parts were made in shops, the parts were easily transported to the site, and the bridge could be quickly erected by semi-skilled labor. It was so popular, it became the standard American truss bridge for moderate spans (from 25 feet to 150 feet), well into the 20th century.
The Pratt Bridge in Guilford is 83 feet long. The distance between the side trusses is 15 feet, 6 inches. It is a single-span structure designed to carry one set of train rails. Instead of crossing the river at right angles, the bridge has a built-in 35-degree skew.
[Picture of Pratt Bridge in Guilford] Before: The old bridge lies neglected and overgrown with vines. There is no flooring and the beams are covered with graffiti.
[Picture of pin connection] A Bottom-Chord Pin Connection (Photo courtesy of GPI Greenman-Pederson Inc, designers of the bridge renovations)
This bridge was built in 1902 to carry the Patuxent Branch of the Baltimore and Ohio Railroad over the Little Patuxent River. The train carried heavy loads of granite stone from the Guilford quarries until 1925. The bridge abutments are made from this granite, which was used to construct many other bridge abutments and culverts on the B&O lines. After the railroad spur was abondoned, the bridge surface was planked to serve for a time as a local farm road.
Although hundreds of these bridges were built, only a few survive. Several miles downstream, a similar Pratt bridge, known as the Gabbro Bridge, once carried the quarry train over the Middle Patuxent River. It was washed away by floods.
Long abandoned, the Pratt Through-Truss Bridge was rescued by the Howard County Department of Recreation and Parks and adapted to carry the Patuxent Branch Trail across the river. Its reopening was celebrated with a ribbon cutting on November 2, 2002.
[Diagram of a typical truss bridge] Diagram of a Typical Truss Bridge (Courtesy of Historic American Engineering Record, National Park Service)
Caleb and Thomas Pratt were a father-and-son team from Boston. Caleb was an architect, and his son Thomas, born in 1812, became an engineer. By the age of 12, Thomas was preparing plans in his father's office. When he was 14, he attended Rensselaer Polytechnic Institute in Troy, NY, and then went to work for the railroads designing bridges and other structures. He designed his first truss bridge in 1842; then he and his father were granted a joint patent in 1844 for the Pratt Truss Bridge. When they made the bridge entirely of iron, they pioneered the age of iron railroad bridges.
[image of reopened bridge] After: Flags and bunting decorate the bridge on opening day of the Patuxent Branch Trail November 2, 2002
A Truss is a structural triangle formed by three pieces of material (usually wood or metal) that are joined together to form a set of trusses called a web. This arrangement provides great strength and is relatively light. The main pieces may be either stiff, heavy struts or thin, flexible rods. How they are arranged determines the type of truss. Many wooden covered bridges were truss bridges.
What is a Through Truss? There are three basic arrangements of trusses, each carrying traffic in a different way:
[image of deck truss] The deck truss is below the travel surface.
[image of pony truss] The pony truss has trusses on the sides of the travel surface, but is not braced at the top.
[image of through truss] The through truss has trusses on the sides, as well as cross bracing on the top and bottom. Traffic travels through it.
(Diagrams of trusses courtesy of the Historic American Engineering Board, National Park Service)
Erected 2003 by Howard County Department of Recreation and Parks.
Location. 39° 9.946′ N, 76° 50.46′ W. Marker is in Columbia, Maryland, in Howard County. Marker can be reached from Old Guilford Road, 0 miles from Guilford Road. Marker is on the trail near the parking lot, on the north side of the restored bridge over the Little Patuxent River. Marker is in this post office area: Columbia MD 21046, United States of America.
Other nearby markers. At least 8 other markers are within 3 miles of this marker, measured as the crow flies. The Little Patuxent River (a few steps from this marker); The Granite Quarries (a few steps from this marker); The Patuxent Branch of the B&O Railroad (a few steps from this marker); The Town of Guilford (within shouting distance of this marker); Governor Harry R. Hughes (approx. 1.4 miles away); Christ Episcopal Church (approx. 1.5 miles away); This Survey Point (approx. 2 miles away); Carroll Baldwin Memorial Hall (approx. 2.1 miles away).
Categories. • 20th Century • Bridges & Viaducts • Industry & Commerce • Railroads & Streetcars •
Credits. This page was originally submitted by F. Robby of Baltimore, Maryland, and was last revised on June 16, 2016.
Virtues: A fairly indestructible blooming houseplant (or garden plant in USDA Zones 9 and warmer). Its needs differ from average houseplant care because it likes dry air, dry soil and bright light with no direct sun—making it a natural match for interior growing conditions. It also likes to be potbound and will bloom even in a relatively tiny pot.
Common name: Fire lily, bush lily, clivia
Botanical name: Clivia miniata
Flowers: Rounded clusters of red, orange or yellow trumpet-shaped flowers appear on tall, thick stalks in late winter and early spring, lasting for several weeks.
Foliage: Strappy dark green evergreen leaves, 2 to 3 feet long.
Habit: Clumping evergreen perennial that grows from thick rhizomatous roots.
Season: Late winter/early spring, for flowers.
Origin: Native to South Africa.
Cultivation: Grow Clivia miniata in bright light, but not in direct sun. Use well-draining soil and keep it on the dry side, watering the plant thoroughly only when the top inch of soil feels dry. Allow the plant to remain potbound; roots appearing above the soil line are normal. Repot every 3 to 5 years, after the plant blooms, and step up only one pot size. Clivia miniata likes dry air; it does not need to be misted or stood on a tray of damp gravel as many other houseplants do.
Beginning in fall, give the plant a rest by keeping it in a cool room (50˚–65˚F) and watering it only if it begins to wilt, and then only giving it a splash of water to slightly moisten the soil. After a 6- to 8-week rest, move the plant into a warmer room and begin watering more frequently; blooming should soon commence.
USDA Zones 9–11 for outdoor growing; kept elsewhere as a houseplant. In colder zones it can spend the summer outdoors in a shady location, but it should be brought indoors before the first frost. Fertilize monthly from mid-spring until late summer, using a balanced water-soluble fertilizer mixed at half strength.
Propagation of Clivia miniata: Growing from seed can be difficult, and it will be years before the plant flowers. An easier way to propagate Clivia miniata is by division, which can be done at any time of year. If you notice offsets (small plants growing from the base of the mother plant), simply pull them off, making sure to include some roots, and pot them. Care for them the same way you care for the mother plant.
Learn all about houseplants, houseplant care and get help with houseplant identification with The Houseplant Encyclopedia.
Read about common houseplants and houseplant care in Jim Hole’s What Grows Here: Indoors: Favorite Houseplants for Every Situation.
Choose tropical plants for your Zone 8 or warmer garden, or plants for growing as annuals or tropical houseplants, with The Tropical Look.
Source: This document taken from the Report of Apollo 204 Review Board
NASA Historical Reference Collection, NASA History Office, NASA Headquarters, Washington, DC.
Spacecraft 012, assigned to Mission AS-204, was built at North American Aviation, Inc., Space and Information Systems Division, Downey, California. Enclosure 1 shows sketches of the complete space vehicle, the spacecraft and the Command Module. Fabrication was begun in August 1964 and the basic structure was completed in September 1965. While the structure was being fabricated, each component of every subsystem was subjected to acceptance tests and subsystems were assembled. During this period a series of Preliminary Design Reviews was held between November 1964 and January 1965. Installation and final assembly of subsystems into the Command Module took place between September 1965 and March 1966. Critical Design Reviews were held during February and March 1966. Checkout of all subsystems was then initiated, followed by integrated testing of all spacecraft subsystems. A series of reviews of the spacecraft and checkout was held during the checkout and integrated testing process. A two-phase Customer Acceptance Readiness Review was conducted by NASA at Downey in conjunction with NAA in July and August 1966. After the August review NASA issued a Certificate of Flight Worthiness and authorized the spacecraft to be shipped to the John F. Kennedy Space Center (KSC), Florida. The Certificate included a listing of open items and work to be completed at KSC.
The Command Module was received at KSC on August 26, 1966. It was mated with the Service Module in the altitude chamber at KSC early in September 1966, and alignment, subsystems and system verification tests and functional checks were performed. Many open design change orders were completed and various malfunctions were noted and corrected. The first combined systems tests were begun on September 14 and completed on October 1, 1966. Several malfunctions were noted, and correction of some of these was deferred to a later date.
A Design Certification Review was held at NASA Headquarters during September and October 1966. This detailed review was conducted by a Board chaired by the Associate Administrator for Manned Space Flight. Board members were Office of Manned Space Flight and Center Directors. This Board issued a Design Certification Document on October 7, 1966 which certified the design as flightworthy, pending satisfactory resolution of listed open items.
After the combined systems tests were completed at KSC in the altitude chamber, the first manned test in this facility was performed. This test was conducted in air at sea level pressure and was made to verify total spacecraft system operation. The test was initiated on October 10 and discontinued on October 11 to replace bent umbilical pins. The test was begun again on October 12 and completed on October 13. On October 14 and 15, an unmanned test was performed at altitude pressures using oxygen to verify spacecraft system operation under these conditions before a manned altitude test was run. The manned test (with the flight crew) was initiated on October 18 but was discontinued after reaching a simulated altitude of 13,000 feet because of the failure of a transistor in one of the inverters in the spacecraft. The inverter was replaced and the test was completed on October 19. A second manned altitude test (with the backup crew) was initiated on October 21 but it was discontinued when a failure occurred in an oxygen system regulator in the spacecraft Environmental Control System. This regulator was removed and found to have a design deficiency. While redesign was being accomplished various spacecraft work items were completed.
On October 27 the Environmental Control Unit was removed and returned to the factory for a design change to the water/glycol evaporator.
During this period a propellant tank had ruptured in the Service Module of Spacecraft 017 at Downey. Therefore, it was decided that the tanks on the Spacecraft 012 Service Module should be checked by special testing at KSC. In order to conduct this testing in parallel with further checkout of the Command Module, the Spacecraft 012 Command Module was removed from the altitude chamber. The Service Module was later removed for tests related to the propellant tanks. The Service Module and Command Module were reinstalled in the altitude chamber and the ECU was installed. A water/glycol leak developed in the ECU, and it was again returned to the factory for further examination of the leak problem. It was returned on December 14, 1966.
Also, during this period on December 21, 1966 the Apollo Program Director conducted a Recertification Review which closed out the majority of the open items remaining from previous reviews.
After the Command and Service Modules were reinstalled in the altitude chamber, testing in the chamber was resumed. The sea level and unmanned altitude tests were conducted on December 27 and 30.
It should be noted that this final manned test in the altitude chamber was very successful with all spacecraft systems functioning normally. At the post-test debriefing the backup flight crew expressed their satisfaction with the condition and performance of the spacecraft.
It should also be noted that in the altitude chamber tests the Command Module was pressurized with pure oxygen four times at pressures greater than 14.7 psia for a total time of 6 hours and 15 minutes. The total time was about 2 1/2 times longer than the time the Command Module was pressurized with oxygen during the test which was in progress when the accident occurred.
The Command Module was removed from the altitude chamber on January 3, 1967 and the spacecraft was mated to the launch vehicle on January 6 at Launch Complex 34. Various tests and equipment installations and replacements were then performed.
The system was determined to be ready for the initiation of the Plugs-Out Test on January 27, 1967.
Of the many events which took place at KSC subsequent to the arrival of the spacecraft a few stand out as possible indications of deficiencies in the program and some appear to have possible relation to the fire.
The events that possibly may be related to the fire are those concerned with the occasions when water/glycol spillage or leakage from the Environmental Control System was noted. This may be of significance in that water/glycol coming into contact with electrical connectors can cause corrosion of these connectors. Dried water/glycol on wiring insulation leaves a residue which is electrically conductive and combustible. Of the six recorded instances where water/glycol spillage or leakage occurred (a total of 90 ounces leaked or spilled is noted in the records), the records indicate that this resulted in wetting of conductors and wiring on only one occasion. Action was taken to clean the water/glycol from the connectors and wiring on this one occasion. There is no evidence which indicates that damage resulted to the conductors or that faults were produced on connectors due to water/glycol which contributed to the fire. If the cleaning was inadequate, residue would have remained on the wires. Also, undetected wetting could have occurred, which would leave a residue. Small quantities of water/glycol were found in the Command Module after the fire. This, however, could have been due to water/glycol line breakage which is known to have occurred during the fire. And while water/glycol and its residue may have contributed to the spread of the fire, there is no positive evidence that residue was related to the ignition of the fire.
The number of open items at the time of shipment of Command Module 012 was not known. There were 113 significant Engineering Orders not accomplished at the time Command Module 012 was delivered to NASA; 623 Engineering Orders were released subsequent to delivery. Of these, 22 were recent releases which were not recorded in configuration records at the time of the accident.
The effort and rework required on Spacecraft 012 at KSC was greater than that experienced on the first manned Gemini spacecraft. However, since the Apollo spacecraft are considerably more complex than Gemini spacecraft, this does not necessarily indicate that the quantity of problems encountered was excessive. There is, however, an inference that the design, qualification and fabrication process may not have been completed adequately prior to shipment to KSC.
Another item should be noted when considering the problems that were found at KSC including some of the problems encountered in the Plugs-Out Test prior to the fire. The prime purpose of all tests conducted prior to launch is to verify and demonstrate that the space vehicle ground support equipment, procedures and personnel are all ready for flight operations. Many of the tests involve a "first time" operation particularly in an overall sense. Therefore, inherent in the verification process is the likelihood that faults will be found in procedures and in equipment. This Plugs-Out Test had not been classified as hazardous because only those tests involving fueled vehicles, hypergolic propellants, cryogenic systems, high pressure tanks, live pyrotechnics or altitude chamber tests were routinely classified as hazardous.
Updated February 3, 2003
Steve Garber, NASA History Web Curator
For further information E-mail [email protected]
Nasa has unveiled an amazing simulation of the wet, warm planet that Mars used to be - before it all went wrong.
The artist's impression video captures the look and feel of the planet, with rolling white clouds and lakes of liquid water on the surface.
Mars today has no liquid water on the surface, due to low atmospheric pressure and its cold temperature.
But around four billion years ago it was a different story. Nasa's probes on the surface have so far turned up evidence for a world rich with the conditions in which life could emerge.
Joseph Grebowsky of Nasa's Goddard Space Flight Centre said:
"There are characteristic dendritic structured channels that, like on Earth, are consistent with surface erosion by water flows. The interiors of some impact craters have basins suggesting crater lakes, with many showing connecting channels consistent with water flows into and out of the crater. Small impact craters have been removed with time and larger craters show signs of erosion by water before 3.7 billion years ago. And sedimentary layering is seen on valley walls. Minerals are present on the surface that can only be produced in the presence of liquid water, e.g., hematite and clays."
The video - released to promote the work of its new probe Maven, which will be launched in November and arrive in orbit around Mars in 2014 - shows the transition to the dusty, red world we know today.
There are several theories about how Mars was stripped of its atmosphere. They include the possibility of large-scale asteroid strikes, and the loss of its "intrinsic magnetic field" which protected it from erosion by the solar wind.
social dimensions of climate change
Are you interested in climate change? Would you like to win a trip to Washington, DC? Do you think you have what it takes to make a micro-documentary film? Then The World Bank's Social Development Department's worldwide documentary competition "Vulnerability Exposed: Social Dimensions of Climate Change" is waiting for your participation.
There are two award categories: 1) Social Dimensions of Climate Change Award (general category) and 2) Young Voices of Climate Change Award (youth category). The general category is open to everyone; the youth category is open to entries submitted by filmmakers who are under 24 years old. Award winners will be chosen through a combination of public voting and a judging panel.
There is no denying that global climate change is happening now. Developing countries and particularly the world’s poorest people are affected first and worst by changes of climate and extreme weather events such as floods, droughts, heat waves, and rising sea levels. The World Bank’s Social Development Department is looking for submissions 2-5 minute documentaries which creatively showcase the social implications of climate change in the areas of conflict, migration, urban space, rural institutions, drylands, social policy, indigenous peoples, gender, governance, forests and/or human rights.
A gigantic ice shelf, as large as the city of Hamburg in Germany, broke away from the Pine Island Glacier in the Antarctic on Monday and is now floating in the form of a huge iceberg in the Amundsen Sea, an arm of the Southern Ocean off Marie Byrd Land in western Antarctica.
It was NASA, which, in October 2011, first discovered the evidence of the Pine Island Glacier ice shelf beginning to break apart. At that time, the crack that cut across the floating ice shelf was about 24-kilometers-long and 50-meters-wide. A second crack was spotted in May 2012, producing an initial 30-square-kilometer-size iceberg.
“As a result of these cracks, one giant iceberg broke away from the glacier tongue. It measures 720 square kilometers and is therefore almost as large as the city of Hamburg," Professor Angelika Humbert, ice researcher at the Alfred Wegener Institute in Bremerhaven, Germany, said in a statement, on Tuesday.
Humbert and other researchers, who were tracking the changes in the cracks using the German Space Agency's TerraSAR-X satellite, documented the area of the ice shelf in multiple images to solve the mystery of the ice calving and better understand the physical processes behind the glacier's movements.
Researchers said that the large crack on the Pine Island glacier, which extended initially to a length of 28 kilometers, widened gradually to measure around 540 meters at its widest point just before the iceberg’s birth.
Humbert said that climate change has very little impact on ice breaks and the formation of new icebergs. According to her, the Pine Island Glacier, which is the fastest-flowing glacier in western Antarctica, with a flow speed of around 4 kilometers per year, gets its speed from changing wind directions on the Amundsen Sea, rather than from rising air temperatures.
“The wind now brings warm sea water beneath the shelf ice. Over time, this process means that the shelf ice melts from below, primarily at the so-called grounding line, the critical transition to the land ice,” Humbert said.
A recent study revealed that warm ocean currents melted 55 percent of all Antarctic ice shelves from the bottom between 2003 and 2008. In total, Antarctic ice shelves lost 2,921 trillion pounds (1,325 trillion kilograms) of ice each year between 2003 and 2008 because of ice melting from the bottom.
If the flow of the Pine Island Glacier speeds up, it could have serious consequences for the western Antarctic ice sheet which, if it flows into the ocean, could lead to a global rise in sea level of around 3.3 meters, Humbert added.
How important is the human exploration of space? One argument you're unlikely to hear in most debates over the wisdom of going to the stars involves a calculation by Dr. J. Richard Gott, a Louisville native, Princeton astrophysicist and speaker at the IdeaFestival this September.
In 1993 he used the Copernican Principle to assess the odds for human survival and came up with the near certainty, statistically speaking, that humanity would go on for at least another 5,100 years.
The Copernican principle makes reasonable guesses about the future using one known fact and the assumption that there is nothing special about this moment in time. In 1969, Gott used the principle to accurately predict, for example, how long the Berlin Wall would stand.
John Tierney's New York Times article, "A Survival Imperative for Space Colonization" elaborates:
Suppose you want to forecast the political longevity of the leader of a foreign country, and you know nothing about her country except that she has just finished her 39th week in power. What are the odds that she’ll leave office in her 40th week? According to the Copernican Principle, there’s nothing special about this week, so there’s only a 1-in-40 chance, or 2.5 percent, that she’s now in the final week of her tenure.
It’s equally unlikely that she’s still at the very beginning of her tenure. If she were just completing the first 2.5 percent of her time in power, that would mean her remaining time would be 39 times as long as the period she’s already served — 1,521 more weeks (a little more than 29 years).
So you can now confidently forecast that she will stay in power at least one more week but not as long as 1,521 weeks. The odds of your being wrong are 2.5 percent on the short end and 2.5 percent on the long end — a total of just 5 percent, which means that your forecast has an expected accuracy of 95 percent, the scientific standard for statistical significance.
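Gott's interval arithmetic is easy to reproduce. The short sketch below is a minimal illustration of the rule described in the quoted passage, not code from the article; the function name `copernican_interval` is my own. It computes the 95 percent bounds from an observed age: the future duration falls between 1/39 and 39 times the past duration.

```python
# Copernican-principle 95% interval: if nothing is special about the
# present moment, there is a 2.5% chance we are in the first 2.5% of the
# total duration and a 2.5% chance we are in the last 2.5%, so the
# remaining time lies between t/39 and 39*t for an observed age t.

def copernican_interval(age, confidence=0.95):
    """Return (min_future, max_future) given the observed age so far."""
    tail = (1.0 - confidence) / 2.0   # probability mass in each tail, e.g. 0.025
    factor = (1.0 - tail) / tail      # 0.975 / 0.025 = 39 for a 95% interval
    return age / factor, age * factor

# The leader who has served 39 weeks: at least ~1 more week,
# at most 39 * 39 = 1,521 more weeks.
low, high = copernican_interval(39)

# Gott's 1993 estimate for humanity, taken to be roughly 200,000 years
# old: at least ~5,100 more years, at most ~7.8 million.
low_h, high_h = copernican_interval(200_000)
```

Note that the interval scales with the observed age, which is why the 39-week leader gets bounds measured in weeks while humanity's bounds run to millennia.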
The "Space Colonization Imperative" suggests that humans should have a space colony up and running on Mars in the next 45 years, since, applying the Copernican principle, the space program is half way through its expected life span. If we don't have a permanent base on Mars by then, it might be too late.
His column also drew a response from one of my favorite writers on big space themes, Paul Gilster. His "Odds on a Human Future" post also describes Dr. Gott's thinking on the matter.
Teething proves to be an uncomfortable and distressing time for babies. This process typically begins around 6 months of age, but it can also start as early as 3 months or as late as 14 months. One of the hardest things about babies' teething is the fact that they experience it differently and with varying levels of pain and stress. Therefore, it's important that, as a parent, you are able to recognize when your baby's teeth start making their presence felt, in order to help ease the discomfort.
Do take note that the lower front teeth are usually the ones to appear first. The upper front teeth normally follow about 2 months after that, and within the next few months, the lower and upper lateral incisors, first molars, canines, and second molars also grow out as well.
The most noticeable symptoms and signs of teething are baby’s restlessness and fussiness due to the soreness and swelling of the gums. Because of this, there’s a growing need for babies to start gnawing on something that will provide a counter pressure relief against the tooth pushing out of their gums. Upon close inspection of the inside of their mouth, you’ll also find puffy and bulging gums, and you might even see the tooth underneath as well. Other behaviors related to teething include excessive drooling, grabbing of the ears, loss or drastic change in appetite, and irritability.
While some babies don’t seem to be bothered by teething, some experience extreme pain and discomfort. One of the most common remedies done to help ease the baby’s discomfort is by using a clean finger or wrap it with a wet and frozen washcloth, and then proceed to gently rub a certain area of the gums for about 2 minutes. In order to provide a fun distraction for them during teething, you can also get them a toy teether—which are available here at Ideal Baby—that they can use to gnaw and bite on. | <urn:uuid:8f44c480-91c7-48a6-b1c4-995c88ebc52c> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.idealbaby.com/Basic-Symptoms-Remedies-for-Teething-Babies_b_84.html",
"date": "2016-09-26T22:22:47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9620396494865417,
"token_count": 427,
"score": 2.765625,
"int_score": 3
} |
The Historical Roots of the Modern Crisis in the Catholic Church
Drawing on his years of experience as a Catholic writer, Philip Trower offers a long view of how the Catholic Church arrived at its present crisis. Whereas many analyses take the Second Vatican Council as their starting point, Trower turns his gaze back towards the previous centuries, searching out the roots of modern conflicts over authority within the Church, the nature of Scripture, the relationship with the secular world, and more.
His central thesis is that the positive movement for reform, and the negative movements of rebellion against the Church's authority, grew up intertwined in the years preceding Vatican II, and that it was only in the period following the Council that the division between the two became clearer. His analysis introduces a host of persons and movements whose legacies endure.
Philip Trowers accessible style of writing and his attention to detail offer the reader a clear understanding of where the Church has come from in its recent past. Turmoil and Truth is essential reading for all who wish to understand the present and future direction of the Catholic Church.
The most comprehensive and penetrating account we have of the post-conciliar crisis. James Hitchcock, St. Louis University
Chapter Seventeen: Enter Modernism
The Bible, the Word of God in human speech, is not like a manual of instructions - though it has often been treated like that. While most of it is straightforward enough, there are also many passages whose meaning is far from immediately self-evident. This is why Bible study has a history going back to Old Testament times.
The obscurities are basically of three kinds.
The first are due to mistakes by copyists. In the transmission of the manuscripts down the ages, the attention of the copyists sometimes wandered, or they added comments in the margin which later became incorporated in the text. As a result, the surviving manuscripts contain numbers of variant readings. The kind of scholarship that tries to determine which of these different readings comes nearest to the original is called textual criticism. It is largely a matter of comparing manuscripts to determine which seems most reliable.
It is not difficult, I think, to see why God, in his providence, allowed the texts to become corrupted in this way. Had he prevented it, had he ensured that the thousands of copyists working over two to three millennia had never made a mistake, the Bible would so obviously be a work of divine origin that faith would no longer be a free act. The variant readings are never sufficient to make the main substance of the biblical books uncertain. They only affect particular sentences or phrases.
Obscurities of the second kind flow from the human limitations and character traits of the inspired human authors. While ensuring that they wrote what he wanted, God did so through the medium of their particular personalities and styles of writing and the kinds of literary composition characteristic of their age. Since they were writing a long time ago, they, not surprisingly, used modes of expression or referred to events and things sometimes beyond the comprehension of later readers.
Difficulties arising from this second class of causes are resolved, in so far as they can be, by the study of ancient languages, history, archaeology, and literary forms or genres (not to be confused with "form criticism"). Are some words to be taken literally or metaphorically? Is a certain book or passage intended to be history in the strict sense, or an allegory or parable, or is it some combination of the two? The search is for what the human author intended to say and how. This is called "the literal sense".
These first two forms of Bible study simply prepare the ground for what in the Church's eyes has always been the most important branch; the study of the religious significance or theological meaning of the texts.
Obscurities in this field are due to the mysterious nature of the subject matter, or, according to St. Augustine, are deliberately put there by the divine author himself. "The Sacred Books inspired by God were purposely interspersed by him with difficulties both to stimulate us to study and examine them with close attention, and also to give us a salutary experience of the limitations of our minds and thus exercise us in proper humility". God does not disclose the full meaning of what he is saying to mere cleverness or sharp wits.
Most of the problems connected with these three branches of Bible study were familiar to the scholars of the ancient world, with the school of Antioch concentrating on the literal meaning and those of Alexandria on possible symbolic or "spiritual" meanings. The critical approach was not unknown either: Origen and St. Jerome, for instance, on the basis of internal evidence, doubted whether the Epistle to the Hebrews was really by St. Paul. But whatever the problems, down to 200 years ago the end in view was always the same: to strengthen belief, deepen understanding and increase love of God.
Since around 1800, on the other hand, "advanced" biblical scholarship has followed a markedly different course with the precisely opposite results. The critical method has been given pride of place over every other approach; attention has focused on technical rather than spiritual questions (when and in what circumstances were the books written), with a high percentage of those trying to answer the questions losing most of their beliefs in the process. This is a plain historical fact which receives surprisingly little attention. Does it mean that the Bible cannot stand up to close examination? No. We have to distinguish between the method and the spirit in which it is used, or between the critical method and the critical movement.
That the critical method, once formulated, would be applied to the Bible was more or less bound to happen, but it was clearly a much more sensitive business than applying it to other historical documents, seeing that implicit in its use was the assumption that the origin of at least some of the books would turn out not to be what had hitherto been thought.
The method also carries with it a number of temptations. Experts like to exercise their skills. But if a text is the work of a single author, without additions or interpolations and written when it was thought to have been, there is nothing for the critic to do. The method, of its nature, therefore carries within it a kind of bias against single authorship. There will be a tendency to see any ancient text as necessarily a patchwork of literary fragments put together by groups of editors at some considerable time after the events described. This is different from recognizing, as has always been done, that the biblical authors, like other writers about past events, when not writing about events they had themselves taken part in, depended on external sources. We can see the tendency at work in 19th-century Homeric studies, where it came to be more or less taken for granted that any work before the fifth or sixth century B.C. must be of composite authorship. Homer's very existence was doubted, and the authorship of the Iliad and Odyssey assigned to a mob of Greek poets spanning several centuries. Since then Homeric studies have changed course. A real Homer is credited with the bulk of the epics. But there has been no such change of course in advanced biblical scholarship.
Another temptation will be to try to ape the exact sciences by assigning a certainty to conclusions, which, because of the nature of the subject matter, can only be conjectural. Nevertheless, as we have already said, there is nothing objectionable about the method itself. The Church has approved it, and its use by biblical scholars with faith and a sense of proportion has thrown light on numbers of incidental scriptural obscurities.
The critical movement is another matter. Although forerunners like the 17th-century French Oratorian priest Richard Simon and the 18th-century French physician Jean Astruc were Catholics, we can take as the movement's starting point the publication of The Wolffenbuttel Fragments (1774-1778) by the German Lutheran dramatist and writer Lessing. The "fragments" were actually extracts from an unpublished manuscript by the rationalist scholar Reimarus, which Lessing pretended he had found in the royal Hanoverian library at Wolffenbuttel. A few years later, Gottfried Eichhorn, the Lutheran professor of oriental languages at Jena (and subsequently Gottingen), published his Introductions to the Old and New Testaments (1780-1783 and 1804-1812), and from then on the movement was dominated by scholars whose conclusions about the time and the way the biblical books were written were influenced as much by philosophical assumptions and cultural prejudices as by concrete evidence.
Their principal assumption was that supernatural phenomena like miracles and prophecy are impossible, and therefore a large part of the Bible must be folklore. They also tended to see people in the past as necessarily inferior, uninterested in objective truth and incapable of transmitting facts accurately, while regarding priests as by nature deceitful and only interested in the maintenance of their collective authority. Evidence that the art of writing was practised by the Hebrews at least by the time of the Exodus, and of the capacity of non-literate peoples to orally transmit religious traditions faithfully over long periods of time was either downplayed or ignored. These assumptions had in most cases already been made before they set to work.
The Pentateuch and Gospels were the main objects of attention. The crucial question about the composition of the Pentateuch is not "When were the books written or put together in the form we now have them?" but "Was the information they contain, whether recorded by Moses or others, transmitted accurately down the centuries?"
The crucial question about the composition of the Gospels is "Were they, or were they not, written by eye-witnesses, or by men with more or less direct access to eye-witnesses?"
Copyright © 2016 by Ignatius Press
Located about 10 miles northwest of Madison, the small community of Lancaster, Indiana played an important role in Underground Railroad efforts in southern Indiana. Leaders of the local movement included Lyman Hoyt, Samuel Tibbitts, and James Nelson. Lancaster is also home to the Eleutherian College, which was built in the 1840s to educate men and women regardless of race.
Image: Illustrated Historical Atlas of Indiana (Baskin, Forster and Company, 1876; Reprinted, Indiana Historical Society, 1968)
The Lyman Hoyt house in Lancaster was the home of one of the founders of Eleutherian College and an active participant in the Underground Railroad. Diaries from his children highlight many of his UGRR activities. The Hoyt House is listed on the National Register and the Network to Freedom.
This majestic limestone building in Lancaster is Eleutherian College. The college was built by anti-slavery Baptists in the 1840s to educate men and women, regardless of race, in the same building--a rarity for the time. This building was also used for UGRR activities. Eleutherian College is a National Historic Landmark, listed on the National Register and the Network to Freedom.
Cadmus was the son of the Phoenician king Agenor and brother to Europa. When Zeus kidnapped her, Cadmus was told by his father to find her and not to return until he did.
Unable to find his sister, Cadmus consulted the oracle at Delphi, who said he must abandon the search and instead follow a cow and found a city where the animal lay down to rest. Thus he became the founder of Thebes. He killed a dragon near Thebes and planted its teeth in the ground. Out of the teeth warriors grew, and they fought each other until only five remained. Cadmus made these five warriors, the sparti (the sown), heads of Thebes' noble families. One of them, Echion, he eventually married to his daughter Agave.
In killing the dragon Cadmus had upset Ares, the dragon's master or father, and had to serve him for eight years in penance. When he finished, he married Ares and Aphrodite's daughter Harmonia. The couple had four daughters: Autonoe, Ino, Semele and Agave, and a son, Polydoros.
Cadmus ended his days in Illyria after a series of misfortunes, and he and Harmonia were transformed into snakes. It was Cadmus who gave the Greeks the Greek alphabet.
Shells appear on churches and tombstones, and are used by pilgrims:
(1) If dedicated to James the Greater, the scallop-shell is his recognised emblem. (See James.) If not, the allusion is to the vocation of the apostles generally, who were fishermen, and Christ said He would make them “fishers of men.”
(2) On tombstones, the allusion is to the earthly body left behind, which is the mere shell of the immortal soul.
(3) Carried by pilgrims, the allusion may possibly be to James the Greater, the patron saint of pilgrims, but more likely it originally arose as a convenient drinking-cup, and hence the pilgrims of Japan carry scallop shells.
Source: Dictionary of Phrase and Fable, E. Cobham Brewer, 1894
IBM is celebrating the 100th anniversary of its founding Thursday. Led by American capitalist icons Thomas J. Watson, Sr., and Thomas J. Watson, Jr., until the 1970s, the company grew from a pre-World War I conglomeration of companies making tabulating machines and time-keeping devices into a globe-spanning technology behemoth that pioneered the development of electronic computers and dominated the mainframe era.
The company holds a mind-boggling array of patents and pioneered advances in a wide range of technologies including punched cards, processors, transistors, storage, word processing, databases, and OSes. As one of the emblematic 20th-century corporations, IBM also went through turbulent times. The U.S. government brought several antitrust lawsuits against the company, and critics have attacked it for alleged cosiness with repressive regimes. After Watson, Jr., retired in 1971, the company seemed to lose its way as mainframe computing began to face competition from smaller, more modular systems. Increasing bureaucracy contributed to missteps during the PC revolution, and IBM suffered a series of annual losses in the early 1990s. Under then-CEO Lou Gerstner, starting in the mid-90s, the company bounced back to profit by focusing on software, system integration, and other services, which remain key to the company's growth today.
Although Hewlett-Packard, after its acquisition of Compaq, overtook IBM as the world's largest computer company by annual revenue, IBM's global reach and broad product portfolio still make it one of the largest and most profitable IT companies in the world, with about 427,000 employees and a profit of $14.8 billion on sales of $99.9 billion last year. The following is a timeline of milestone events in one of the quintessential U.S. corporate success stories.
1889: Time-recording equipment maker Bundy Manufacturing is incorporated.
1896: Punched-card, electric tabulating equipment maker The Tabulating Machine is incorporated.
1911: Incorporation on June 16 of the Computing-Tabulating-Recording Company (C-T-R), which merges Bundy, Tabulating Machine, Computing Scale, and International Time Recording. Headed by trust organizer Charles Flint, the company has 1,300 employees.
1914: Thomas J. Watson, Sr., joins C-T-R at age 40, after learning aggressive sales tactics at the National Cash Register (NCR) that led to his conviction on antitrust charges. The verdict was set aside after an appeal. Within 11 months of joining C-T-R, Watson became its president. His focus on marketing and sales and large-scale tabulating products for businesses helped company revenue more than double in his first four years at C-T-R, to $9 million. Over the next four decades as IBM CEO, Watson, Sr., became an American business icon, pioneering worker benefits such as paid vacations and group insurance while instilling discipline and loyalty in generations of IBM workers.
1923: The first electric key punch is introduced, representing an advance on mechanical systems.
1924: Taking the name from a Canadian affiliate, C-T-R formally becomes International Business Machines.
1928: The 80-column IBM punched card, doubling prior capacity, is unveiled and remains a standard for 50 years.
1931: A watershed year in advances: IBM 400 accounting machines offer alphabetic data, the 600 series calculating machines perform multiplication and division, and the first automatic multiplying punch and reproducing punch machines are introduced.
1933: IBM acquires Electromatic Typewriters, acquiring entry in the typewriter business, which ultimately leads to innovations in word processing.
1936: Watson, Sr.'s, insistence on making machines during the Depression, even when demand dried up, pays off when IBM is in a position to participate in what was then billed as the biggest accounting operation of all time: supplying punched-card equipment to the U.S. government in the wake of the 1935 Social Security Act.
1937: Watson, Sr., is elected president of the International Chamber of Commerce, and at a Berlin meeting promotes "World Peace Through Trade," taken on as a slogan by the ICC and IBM. Germany awards him an Order of the German Eagle medal. He returns the medal in 1940, enraging the fascist government, but IBM's business in Germany in the 1930s stirs criticism over the years.
1944: IBM's first large-scale computer, the Automatic Sequence Controlled Calculator or the Mark I, is the first machine to accomplish long operations automatically, using electromechanical relays.
1946: IBM's 603 Electronic Multiplier is the first commercially available machine to offer electronic arithmetic circuits. It is more than 50 feet long and eight feet high, and weighs almost five tons.
1948: IBM releases the Selective Sequence Electronic Calculator, a large-scale digital calculating machine that uses electromechanical relays and offers for the first time the ability to modify a stored program.
1952: The IBM 701 is IBM's first production electronic computer, featuring tape-drive technology that ultimately led to the ascendance of magnetic tape.
1952: Thomas J. Watson, Jr., becomes IBM's president. He was a force behind the 701, essentially a bet-the-company stance on electronic computers before they became more cost-effective than electromechanical machines, leading the way for IBM to dominate computing for the next few decades during the mainframe era.
1956: A consent decree ends a 1952 U.S. antitrust suit, as IBM adapts a more liberal policy toward licensing equipment.
1956: Watson, Jr., takes over as CEO in May, before the death of his father in June. Watson, Jr., moves to reorganize IBM along divisional lines, based on a "line and staff" concept that is adopted by American business at large.
1957: IBM introduces Fortran, which becomes the main language for technical work and is used to this day.
1961: The Selectric typewriter is released; later models offer memory and give rise to modern word processing.
1964: The IBM System/360 uses solid logic technology (solid state) microelectronics and introduces the concept of a family of computers that share compatible technology, in what was essentially a $5 billion bet on future trends.
1966: IBM's Robert Dennard invents the dynamic random access memory (DRAM) cell, which remains an industry standard.
1969: IBM technology, including an onboard computer, is used in the first manned moon landing.
1971: Watson, Jr., steps down and is succeeded by Frank Cary. The floppy disk is introduced; it later becomes the PC data storage standard.
1975: The IBM 5100 Portable Computer enters the market, weighing 50 pounds and priced at $9,000 to $20,000.
1981: The IBM Personal Computer becomes the smallest and, at $1,565, the lowest-priced PC to date. IBM's deal for Microsoft to supply the operating system and allow competitors to buy it for "IBM-compatible" clones fuels a growing industry and paves the way for competitors such as Dell and Compaq.
1982: A U.S. antitrust suit filed in 1969 is dismissed, but arguably pushes IBM to further separate hardware from software, allowing customers to increasingly mix and match products from different companies, a trend that takes off during the PC era.
1984: The Personal Computer/AT, IBM's second-generation PC, runs on a 6MHz Intel 80286 processor.
1987: The IBM Personal System/2 (PS/2) is launched along with the OS/2 operating system, jointly developed by Microsoft and IBM. OS/2 offers multitasking capabilities and in six months 1 million PS/2s are shipped. But although IBM PC chief James Cannavino wants OS/2 to maintain compatibility with the AT going forward, Microsoft CEO Bill Gates wants to move on to machines built around the Intel 80386 chip. Windows 3.0, released in 1990, offers crude multitasking features but makes use of 80386 memory management and becomes a hit, leaving OS/2 in the dust.
1990: IBM releases the System/390 family, comprising midrange machines and supercomputers, calling it the company's biggest product development in 25 years. New technology includes high-speed fiber-optic channels, ultradense circuits, and extended supercomputer capabilities.
1991: As Microsoft and PC clone makers rake in profits, client/server architecture takes off and IBM shocks long-time industry insiders by announcing an annual loss of $2.8 billion, the first of three annual losses in a row. Under CEO John Akers, IBM considers breaking up into smaller, nimbler companies.
1993: Louis Gerstner, former chief executive of RJR Nabisco, takes the reins as chairman and CEO. At his inaugural press conference, Gerstner plainly states his intention to keep IBM together as an integrated company and his belief that there is a need for a broad-based IT company that can serve as both supplier and systems integrator to customers.
1995: IBM acquires Lotus Development and its Notes collaboration software, making IBM the world's largest software company.
1995: IBM introduces the ThinkPad 701cm laptop, which runs on the Intel 133MHz Pentium processor. The sleek black design is a departure for IBM and wins accolades.
1996: IBM's launch of the DB2 Universal Database -- capable of querying alphanumeric data as well as images, audio, and video -- marks IBM's firm embrace of the Internet.
1997: Deep Blue, an IBM RS/6000 SP supercomputer able to calculate 200 million chess positions per second, defeats grandmaster Garry Kasparov.
2001: The publication of Edwin Black's "IBM and the Holocaust" coincides with an Alien Tort Claims Act claim, later dismissed, against IBM for allegedly supplying punched-card technology that enabled the Holocaust. IBM's response points out that, along with hundreds of foreign-owned companies in Germany at that time, its affiliate came under the control of Nazi authorities before World War II.
2002: Sam Palmisano becomes CEO in March, and in July IBM signals it is further strengthening its services business with a $3.5 billion acquisition of the PricewaterhouseCoopers global business and consulting technology unit.
2005: Although IBM has sold more than 20 million ThinkPads, it announces the sale of its PC business to Lenovo in an effort to further focus on software and services.
2011: Watson, a system comprising 90 IBM Power 750 servers, shows off IBM's artificial intelligence and systems architecture expertise by defeating two "Jeopardy" game show champions in a two-game match.
Sources: IBM; interview with James Birkenstock, conducted in 1980 by Roger Stuewer and Erwin Tomash for the Charles Babbage Institute; "Big Blues: The Unmaking of IBM," by Paul Carroll (Crown Publishers, 1993); and IDG News Service archives
STORY I. The Prophet and his Infidel Guest.
AFTER the usual address to Husamu-'d-Din follows a comment on the precept
addressed to Abraham, "Take four birds and draw them towards thee, and cut
them in pieces." 1 The birds are explained to be the duck of
gluttony, the cock of concupiscence, the peacock of ambition and ostentation,
and the crow of bad desires, and this is made the text of several stories.
Beginning with gluttony, the poet tells the following story to illustrate the
occasion of the Prophet's uttering the saying, "Infidels eat with seven bellies,
but the faithful with one." One day some infidels begged food and lodging
of the Prophet. The Prophet was moved by their entreaties, and desired each of
his disciples to take one of the infidels to his house and feed and lodge him,
remarking that it was their duty to show kindness to strangers at his command,
as much as to do battle with his foes. So each disciple selected one of the infidels
and carried him off to his house; but there was one big and coarse man, a very
giant Og, whom no one would receive, and the Prophet took him to his own house.
In his house the Prophet had seven she-goats to supply his family with milk,
and the hungry infidel devoured all the milk of those seven goats, to say
nothing of bread and other viands. He left not a drop for the Prophet's family,
who were therefore much annoyed with him, and when he retired to his chamber
one of the servant-maids locked him in. During the night the infidel felt very
unwell in consequence of having overeaten himself, and tried to get out into
the open air, but was unable to do so, owing to the door being locked. Finally,
he was very sick, and defiled his bedding. In the morning he was extremely
ashamed, and the moment the door was opened he ran away. The Prophet was aware
of what had happened, but let the man escape, so as not to put him to shame.
After he had gone the servants saw the mess he had made, and informed the
Prophet of it; but the Prophet made light of it, and said he would clean it up
himself. His friends were shocked at the thought of the Prophet soiling his
sacred hands with such filth, and tried to prevent him, but he persisted in
doing it, calling to mind the text, "As thou livest, O Muhammad, they were
bewildered by drunkenness," 2 and being, in fact, urged to it by a
divine command. While he was engaged in the work the infidel came back to look
for a talisman which he had left behind him in his hurry to escape, and seeing
the Prophet's occupation he burst into tears, and bewailed his own filthy
conduct. The Prophet consoled him, saying that weeping and penitence would
purge the offence, for God says, "Little let them laugh, and much let them
weep;" 3 and again, "Lend God a liberal loan;" 4
and again, "God only desireth to put away filthiness from you as His
household, and with cleansing to cleanse you." 5 The Prophet then
urged him to bear witness that God was the Lord, even as was done by the sons
of Adam, 6 and explained how the outward acts of prayer and fasting bear
witness of the spiritual light within. After being nurtured on this spiritual
food the infidel confessed the truth of Islam, and renounced his infidelity and
gluttony. He returned thanks to the Prophet for bringing him to the knowledge
of the true faith and regenerating him, even as 'Isa had regenerated Lazarus.
The Prophet was satisfied of his sincerity, and asked him to sup with him
again. At supper he drank only half the portion of milk yielded by one goat,
and steadfastly refused to take more, saying he felt perfectly satisfied with
the little he had already taken. The other guests marveled much to see his
gluttony so soon cured, and were led to reflect on the virtues of the spiritual
food administered to him by the Prophet.
Outward acts bear witness of the state of the heart within.
Prayer and fasting and pilgrimage and holy war
Bear witness of the faith of the heart.
Giving alms and offerings and quitting avarice
Also bear witness of the secret thoughts.
So, a table spread for guests serves as a plain sign,
Saying, "O guest, I am your sincere well-wisher."
So, offerings and presents and oblations
Bear witness, saying, "I am well pleased with you."
Each of these men lavishes his wealth or pains,
What means it but to say, "I have a virtue within me,
Yea, a virtue of piety or liberality,
Whereof my oblations and fasting bear witness"?
Fasting proclaims that he abstains from lawful food,
And that therefore he doubtless avoids unlawful food.
And his alms say, "He gives away his own goods;
It is therefore plain that he does not rob others."
If he acts thus from fraud, his two witnesses
(Fasting and alms) are rejected in God's court;
If the hunter scatters grain
Not out of mercy, but to catch game;
If the cat keeps fast, and remains still
In fasting only to entrap unwary birds;
Making hundreds of people suspicious,
And giving a bad name to men who fast and are liberal;
Yet the grace of God, despite this fraud,
May ultimately purge him from all this hypocrisy.
Mercy may prevail over vengeance, and give the hypocrite
Such light as is not possessed by the full moon.
God may purge his dealings from that hypocrisy,
And in mercy wash him clean of that defilement.
In order that the pardoning grace of God may be seen,
God pardons all sins that need pardon.
Wherefore God rains down water from the sign Pisces,
To purify the impure from their impurities. 7
Thus acts and words are witnesses of the mind within,
From these two deduce inferences as to the thoughts.
When your vision cannot penetrate within,
Inspect the water voided by the sick man.
Acts and words resemble the sick man's water,
Which serves as evidence to the physician of the body.
But the physician of the spirit penetrates the soul,
And thence ascertains the man's faith.
Such an one needs not the evidence of fair acts and words
"Beware of such, they spy out the heart."
Require this evidence of act and word only from one
Who is not joined to the divine Ocean like a stream.
But the light of the traveler arrived at the goal,
Verily that light fills deserts and wastes.
That witness of his is exempt from bearing witness,
And from all trouble and risk and good works.
Since the brilliance of that jewel beams forth,
It is exempted from these obligations.
Wherefore require not from him act and word evidence,
Because both worlds through him bloom like roses.
What is this evidence but manifestation of hidden things,
Whether it be evidence in word, or deed, or otherwise?
Accidents serve only to manifest the secret essence;
The essential quality abides, and accidents pass away.
This mark of gold endures not the touchstone,
But only the gold itself, genuine and undoubted.
These prayers and holy war and fasting
Will not endure, only the noble soul endures.
The soul exhibits acts and words of this sort,
Then it rubs its substance on the touchstone of God's command,
Saying, "My faith is true, behold my witnesses!"
But witnesses are open to suspicion.
Know that witnesses must be purified,
And their purification is sincerity, on that you may depend.
The witness of word consists in speaking the truth,
The witness of acts in keeping one's promises.
If the witness of word lie, its evidence is rejected,
And if the witness of act play false, it is rejected.
Your words and acts must be without self-contradiction
In order to be accepted without question.
"Your aims are different," 8 and you contradict yourselves,
You sew by day, and tear to pieces by night.
How can God listen to such contradictory witness,
Unless He be pleased to decide on it in mercy?
Act and word manifest the secret thoughts and mind,
Both of them expose to view the veiled secret.
When your witnesses are purified they are accepted,
Otherwise they are arrested and kept in durance.
They enter into conflict with you, O stiff-necked one;
"Stand aloof and wait for them, for they too wait." 9
Prayers for spiritual enlightenment.
O God, who hast no peer, bestow Thy favor upon me;
Since Thou hast with this discourse put a ring in my ear,
Take me by the ear, and draw me into that holy assembly
Where Thy saints in ecstasy drink of Thy pure wine!
Now that Thou hast caused me to smell its perfume,
Withhold not from me that musky wine, O Lord of faith
Of Thy bounty all partake, both men and women,
Thou art ungrudging in bounties, O Hearer of prayer.
Prayers are granted by Thee before they are uttered,
Thou openest the door to admit hearts every moment!
How many letters Thou writest with Thy Almighty pen!
Through marveling thereat stones become as wax.
Thou writest the Nun of the brow, the Sad of the eye,
And the Jim of the ear, to amaze reason and sense.
These letters exercise and perplex reason;
Write on, O skilful Fair-writer!
Imprinting every moment on Not-being the fair forms
Of the world of ideals, to confound all thought! 10
Yea, copying thereon the fair letters of the page of ideals,
To wit, eye and brow and moustache and mole!
For me, I will be a lover of Not-being, not of existence,
Because the beloved of Not-being is more blessed. 11
God made reason a reader of all these letters,
To suggest to it reflections on that outpouring of grace. 12
Reason, like Gabriel, learns day by day
Its daily portion from the "Indelible Tablet." 13
Behold the letters written without hands on Not-being!
Behold the perplexity of mankind at those letters!
Every one is bewildered by these thoughts,
And digs for hidden treasure in hope to find it.
This bewilderment of mankind as to their true aims is compared to the bewilderment
of men in the dark looking in all directions for the Qibla, and recalls the
text, "O the misery that rests upon my servants." 14
Then follow reflections on the sacrifice by Abraham of the peacock of ambition
and ostentation. Next comes a discourse on the thesis that all men can
recognize the mercies of God and the wrath of God; but God's mercies are often
hidden in His chastisements, and vice versa, and it is only men of deep
spiritual discernment who can recognize acts of mercy and acts of wrath
concealed in their opposites. The object of this concealment is to try and test
men's dispositions; according to the text, "To prove which of you will be
most righteous in deed." 15
8.4.4.2 Dedicated energy crops
The energy production and GHG mitigation potentials of dedicated energy crops depend on the availability of land, which must also meet demands for food as well as for nature protection, sustainable management of soils and water reserves, and other sustainability criteria. Because future biomass resource availability for energy and materials depends on these and other factors, an accurate estimate is difficult to obtain. Berndes et al. (2003), reviewing 17 studies of future biomass availability, found no complete integrated assessment and scenario studies. Various studies have arrived at differing figures for the potential contribution of biomass to future global energy supplies, ranging from below 100 EJ/yr to above 400 EJ/yr in 2050. Smeets et al. (2007) indicate that the ultimate technical potential for energy cropping on current agricultural land, with projected technological progress in agriculture and livestock, could deliver over 800 EJ/yr without jeopardizing the world's food supply. In Hoogwijk et al. (2005) and Hoogwijk (2004), the IMAGE 2.2 model was used to analyse biomass production potentials for different SRES scenarios. Biomass production on abandoned agricultural land is calculated at 129 EJ (A2) up to 411 EJ (A1) for 2050, possibly increasing after that timeframe; of this, 156 EJ (A2) to 273 EJ (A1) may be available at production costs below US$2/GJ. A recent study (Sims et al., 2006), which used lower per-area yield assumptions and bio-energy crop areas projected by the IMAGE 2.2 model, suggested more modest potentials (22 EJ/yr) by 2025.
Based on an assessment of other studies, Hoogwijk et al. (2003) indicated that marginal and degraded lands (covering some 1.7 Gha worldwide) could, albeit with lower productivities and higher production costs, contribute another 60-150 EJ. Differences among studies are largely attributable to uncertainty in land availability, energy crop yields, and assumptions on changes in agricultural efficiency. Those with the largest projected potential assume that not only degraded/surplus land is used, but also land currently used for food production (including pasture land, as did Smeets et al., 2007).
Converting the potential biomass production into a mitigation potential is not straightforward. First, the mitigation potential is determined by the lower of the supply and demand potentials, so without the full picture (see Chapter 11) no estimate can be made. Second, any potential from bioenergy use will be counted towards the potential of the sectors where bioenergy is used (mainly energy supply and transport). Third, the proportion of the agricultural biomass supply relative to that from the waste or forestry sectors cannot be specified due to lack of information on cost curves.
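A rough sense of how a primary bioenergy supply translates into avoided emissions can nonetheless be sketched by assuming the biomass displaces coal combustion. The emission factor used below (about 94.6 tCO2 per TJ, the IPCC 2006 Guidelines default for coal) is an illustrative assumption introduced here, not a figure from this text:

```python
# Back-of-envelope conversion from primary bioenergy (EJ/yr) to avoided
# CO2 (MtCO2/yr), assuming the bioenergy displaces coal combustion.
# The coal emission factor is an assumption (IPCC 2006 default), not a
# value taken from the report text.

COAL_EF_TCO2_PER_TJ = 94.6  # tonnes CO2 per TJ of coal combusted

def avoided_mtco2(bioenergy_ej: float,
                  ef_tco2_per_tj: float = COAL_EF_TCO2_PER_TJ) -> float:
    """Avoided emissions (MtCO2/yr) from displacing fossil fuel."""
    tj = bioenergy_ej * 1e6        # 1 EJ = 10^6 TJ
    tco2 = tj * ef_tco2_per_tj     # tonnes of CO2 avoided
    return tco2 / 1e6              # convert tonnes to megatonnes

print(round(avoided_mtco2(13)))  # ~1230 MtCO2/yr for 13 EJ/yr
```

Under this assumption, 13 EJ/yr of displaced coal corresponds to roughly 1230 MtCO2/yr, which is consistent in order of magnitude with the top-down model estimates quoted for 2030.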
Top-down integrated assessment models can give an estimate of the cost competitiveness of bioenergy mitigation options relative to one another and to other mitigation options in achieving specific climate goals. By taking into account the various bioenergy supplies and demands, these models can estimate the combined contribution of the agriculture, waste, and forestry sectors to bioenergy mitigation potential. For achieving long-term climate stabilization targets, the competitive cost-effective mitigation potential of biomass energy (primarily from agriculture) in 2030 is estimated to be 70-1260 MtCO2-eq/yr (0-13 EJ/yr) at up to 20 US$/tCO2-eq, and 560-2320 MtCO2-eq/yr (0-21 EJ/yr) at up to 50 US$/tCO2-eq (Rose et al., 2007; USCCSP, 2006). There are no estimates for the additional potential from top-down models at carbon prices up to 100 US$/tCO2-eq, but the estimate for prices above 100 US$/tCO2-eq is 2720 MtCO2-eq/yr (20-45 EJ/yr). This is of the same order of magnitude as the estimate from a synthesis of supply and demand presented in Chapter 11. The mitigation potentials estimated by top-down models represent 5-80% and 20-90% of the mitigation from all other agricultural measures combined, at carbon prices of up to 20 and up to 50 US$/tCO2-eq, respectively.
Description: In a sensor network where every node has a limited energy supply, one of the primary concerns is to maximize the network lifetime through energy-efficient routing. The method of the present invention includes a deterministic traffic scheduling algorithm that balances the load over multiple paths between source and destination, in proportion to their residual energy. This protocol focuses on uniformly utilizing the resources of the network, rather than on optimality of routes.
Most existing sensor network routing protocols optimize for single or shortest path routing. This accelerates the failure of nodes lying along the often used optimal paths, thus adversely affecting the connectivity and hence life of the network.
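The energy-proportional split described above can be sketched as follows (a hypothetical illustration of the general idea, not the patented algorithm itself): each candidate path is weighted by the residual energy of its weakest node, and packets are apportioned deterministically in proportion to those weights. The function names and the bottleneck-energy metric are assumptions made for this sketch.

```python
def path_weights(paths):
    """Weight each path by its bottleneck residual energy.

    `paths` maps a path id to the residual energies of its nodes;
    a path is only as strong as its most depleted node.
    """
    bottleneck = {p: min(energies) for p, energies in paths.items()}
    total = sum(bottleneck.values())
    return {p: e / total for p, e in bottleneck.items()}

def schedule(paths, n_packets):
    """Deterministically apportion packets in proportion to path weights."""
    quotas = {p: w * n_packets for p, w in path_weights(paths).items()}
    assigned = {p: int(q) for p, q in quotas.items()}
    leftover = n_packets - sum(assigned.values())
    # Hand remaining packets to the paths with the largest fractional parts
    # (largest-remainder apportionment keeps the split deterministic).
    by_fraction = sorted(quotas, key=lambda p: quotas[p] - int(quotas[p]),
                         reverse=True)
    for p in by_fraction[:leftover]:
        assigned[p] += 1
    return assigned

# Path A has the healthiest bottleneck node, so it carries the most traffic.
paths = {"A": [5.0, 4.0, 6.0], "B": [2.0, 8.0, 3.0], "C": [2.0, 2.0, 2.0]}
print(schedule(paths, 8))  # {'A': 4, 'B': 2, 'C': 2}
```

Because heavily used paths drain and lose weight over time, traffic automatically shifts toward fresher paths, which is the uniform-utilization behavior the abstract describes.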
For more information please contact Geoffrey Pinski at 513-558-5696 or [email protected]
‘It’s A Myth That Muslim Rulers Destroyed Thousands Of Temples’
REVATI LAUL | @revatilaul
Richard Eaton is the Wikipedia, the Google and, many would argue, the last word on medieval and Islamic history in India. His bibliography is too vast to list, but it includes Islamic History As Global History; The Rise of Islam and the Bengal Frontier, 1204-1760; and Social History of the Deccan, 1300-1761: Eight Indian Lives. After the destruction of the Babri Masjid and myriad speculative conversations around how many temples Muslim rulers had destroyed in India, Eaton decided to count. That became a book titled Temple Desecration and Muslim States in Medieval India. In other words, he is the best myth-buster there is, and that's precisely what he did for the audiences at THiNK. Eaton explains why it's crucial today for us to get our history right, especially on the period he writes about.
EDITED EXCERPTS FROM AN INTERVIEW
You are now working on a magnum-opus history of medieval India, often construed as ‘the Muslim period’. Can you explain why the descriptor ‘Muslim period’ doesn’t work for you?
The book I’m working on now is called The Lion and the Lotus. The lion represents Persia and the Lotus, India. It’s the story of two intersecting megapolises — Persian and Sanskrit. The idea is to escape the trap of looking at this period as the endless and dreary chapter of Hindu-Muslim interaction, if not conflict, which is the conventional and historically wrong approach.
Can you explain why this is historically wrong?
Because religion is anachronistic. Contemporary evidence does not support the assumption that religion was the primary sign or indicator of cultural identity. That is a back projection from the 19th and 20th centuries, which is not justified by the evidence. For example, a word that was typically used to describe rulers who came from beyond the Khyber Pass was not ‘musalmaan’ but rather Turushka or Turk. An ethnic, not religious, identity. What’s fascinating is that the early Turkish rulers, the Ghaznavids, began as foreigners and conquerors; over time, they were behaving more and more like Rajput dynasties. Like Mahmud of Ghazni, for instance. He took the basic credo of Islam — “There is no god but Allah” — translated that into Sanskrit and put it down on the coinage to be freely minted in north-western India. It was an attempt to take Arabic words and structure them into Sanskrit vocabulary. This is a history of assimilation and not imposition. In Vijayanagar in the Deccan, you will find that most of the government buildings were built with arches and domes. You think you are inside a mosque but you are not. Vijayanagar had Hindu kings. This means that the aesthetic vision of Iran has seeped into India so much now that it’s accepted as normal.
What about the masses in this period from 1000 to 1800 AD, who were Hindu?
Okay, let’s talk about ordinary people. You find that languages like Telugu, Bengali, Kannada and Marathi have absorbed a huge amount of Persian vocabulary for everyday concerns. Take another example from the Vijayanagar empire in the south. I talk about south India because that’s where Islam did not have as long a penetration as in the north. The Vijayanagar kings had these long audience halls described as hundred-column and thousand-column palaces — hazaarsatoon. A concept that goes all the way back to Persepolis where you literally do have a hundred columns. You take the floor plan of Persepolis, Iran, in the 4th century BC, which is pre-Islamic, and place it side by side with the floor plan of a palace at Vijayanagar. It’s exactly the same. Neither was built by Muslims. Persepolis was built by Zoroastrians in the 3rd or 4th century BC. And Vijayanagar was built by Hindus in the 14th century AD. Neither has anything to do with religion, but both have everything to do with power. It’s like the present day spread of Coca Cola or Tuborg beer. It’s aspirational but not religious. And it all happens over a period of time.
Which is why you also don’t like the use of the word ‘conversions’ for this period? You say conversions suggest a pancake-like flip, which is not how Islam spread. What do you mean by that?
I hate the use of the word ‘conversions’. When I was studying the growth of Islam in Punjab, I came across a fascinating text on the Sial community. It traces their history from the 14th to the 19th century. If you look at the names of these people, you will find that the percentage of Arabic names increased gradually between the 14th and 19th centuries. In the early 14th century, they had no Arabic names. By the late 14th century, 5 percent had Arabic names. It’s not until the late 19th century that 100 percent had Arabic names. So, the identification with Islam is a gradual process because the name you give your child reflects your ethos and the cultural context in which you live. The same holds true when you look at the name assigned to god. In the 16th century, the words Muslims in Bengal used for god were Prabhu or Niranjan etc — Sanskrit or Bengali words. It’s not until the 19th century that the word Allah is used. In both Punjab and Bengal, the process of Islamisation is a gradual one. That’s why the word ‘conversion’ is misleading — it connotes a sudden and complete change. All your previous identities are thrown out. That’s not how it happens. When you talk about an entire society, you are talking about a very gradual, glacial experience.
You also examined at length the destruction of temples in this period. What did you find?
The temple discourse is huge in India and this is something that needs to be historicised. We need to look at the contemporary evidence. What do the inscriptions and contemporary chronicles say? What was so striking to me when I went into that project after the destruction of the Babri Masjid was that nobody had actually looked at the contemporary evidence. People were just saying all sorts of things about thousands of temples being destroyed by medieval Muslim kings. I looked at inscriptions, chronicles and foreign observers’ accounts from the 12th century up to the 18th century across South Asia to see what was destroyed and why. The big temples that were politically irrelevant were never harmed. Those that were politically relevant — patronised by an enemy king or a formerly loyal king who becomes a rebel — only those temples are wiped out. Because in the territory that is annexed to the State, all the property is considered to be under the protection of the State. The total number of temples that were destroyed across those six centuries was 80, not many thousands as is sometimes conjectured by various people. No one has contested that and I wrote that article 10 years ago.
Even the history of Aurangzeb, you say, is badly in need of rewriting.
Absolutely. Let’s start with his reputation for temple destruction. The temples that he destroyed were not those associated with enemy kings, but with Rajput individuals who were formerly loyal and then become rebellious. Aurangzeb also built more temples in Bengal than any other Mughal ruler.
(Published in Tehelka Magazine, Volume 10 Issue 47, Dated 23 November 2013)
5 Handmade Objects Crafted By First World War POWs
Within the first six months of the First World War, more than 1.3 million prisoners were held in Europe. Accommodating so many POWs was a huge problem for all countries involved. Allegations of cruelty and neglect were commonplace.
Treaties covering the treatment of POWs were agreed before the war through the Hague and Geneva Conventions. But German propaganda reported widely on the brutality of Allied camps to encourage their soldiers to fight to the death as a preference to being captured. Likewise, in Britain it was claimed that Allied prisoners in Germany were systematically persecuted by order of the German government. Inspectors from neutral countries were called upon to check on camp conditions.
When war was declared in 1914, there was no system in place on either side for dealing with POWs. Camps were hastily set up according to need. Many camps were built from scratch but existing buildings were also utilised. The early camps were found to be over-crowded, though this situation improved in Britain once makeshift camps were replaced. Complaints about German camps centred on inadequate sanitation, housing and food (for which the Allied naval blockade was partly responsible), the nature of work assigned to prisoners and the brutal behaviour of the guards. POWs in Turkish camps, except for officers, were treated particularly harshly.
Prisoners of war of all nations produced a wide variety of handmade items. Some of these filled practical requirements, some were a way of coping with the monotony of captivity, while others were intended as a means of earning money for cigarettes or other comforts. Here are five examples of objects made by POWs during the First World War.
Mug made from a condensed milk tin by C J Peck whilst at Munster Prisoner of War camp in Germany between 1917 and 1918.
Beaded snake made by a Turkish prisoner of war.
Playing card box made by a German prisoner at Catterick in Yorkshire and acquired by Lieutenant W Staniland.
Conductor's baton made and used by Major Jack Shaw at Freiburg prison camp in Germany in 1917.
Box painted by a naval prisoner of war interned in Holland in 1914.
HANOI – One of the rarest and most threatened mammals on Earth has been caught on camera in Vietnam for the first time in 15 years, renewing hope for the recovery of the species, an international conservation group said.
The Saola, a long-horned ox, was photographed by a camera in a forest in central Vietnam in September, the WWF said in a statement Wednesday.
“This is a breathtaking discovery and renews hope for the recovery of the species,” Van Ngoc Thinh, Vietnam country director for the WWF, was quoted as saying.
The animal was discovered in the remote areas of high mountains near the border with Laos in 1992, when a joint team from the WWF and Vietnam's forest control agency found a skull with unusual horns in a hunter's home. The find proved to be the first large mammal new to science in more than 50 years and one of only seven types of large mammal discovered in the 20th century.
In Vietnam, the last sighting of a Saola in the wild was in 1998, according to Dang Dinh Nguyen, director of the Saola natural reserve in the central province of Quang Nam.
In the area where the Saola was photographed, the WWF has recruited forest guards from local communities to remove snares and battle illegal hunting, the greatest threat to the Saola's survival, the statement said. The snares were largely set to catch other animals, such as deer and civets, which are a delicacy in Vietnam.
Twenty years after its discovery, little is known about the Saola, and the difficulty of detecting the elusive animal has prevented scientists from making a precise population estimate. At best no more than a few hundred, and maybe only a few dozen, survive in the remote, dense forests along the border with Laos, the WWF said.
Needless to say, many more people would likely choose to install solar panels on their roofs if they had reliable information about how much power they could realistically expect to get from them. Solar panels are expensive, after all. Now Google is starting a new venture called Project Sunroof, which will go a long way towards remedying this problem.
The architecture firm OAS1S from Holland has come up with a unique proposal for a community of small houses, which would be built to resemble trees. The dwellings would all be made from recycled wood, and would function completely off-the-grid. These homes would be called “treescrapers” and the designers envision that once they are built, it would be like walking through a forest in the middle of an urban area.
The task of furnishing a new apartment can be quite daunting for many, given that the walls and windows just about always wreak havoc on your vision of what the place should look like. This won’t be a problem for those inhabiting the Vijayawada Garden Estate, a new apartment building in India, which is currently under construction.
The company Aleutia from the UK is currently in the process of building a school in every one of the 47 counties in Kenya. This will allow for the education of more than 20,000 primary school children, while the schools will also all be powered by solar energy. […]
The city of Mumbai, India is facing quite a shortage of adequate living spaces, so a project has been proposed for a temporary housing solution in the form of a two towers made of used shipping containers. The towers were designed by CRG Architects who decided on a cylinder shape for the structures, […]
For a few years now, the MoMA PS1's Young Architects Program has been using art installations to help raise awareness about pressing environmental issues. This year, they are showcasing COSMO, a giant sculpture that also purifies water. It was designed by NYC and Madrid-based Spanish architect Andrés Jaque and aims to raise awareness about growing water shortage and the need for healthy water systems.
[JURIST] The British Overseas Agencies Group - representing Save the Children UK, Oxfam, Christian Aid and other UK-based humanitarian agencies - released a statement Thursday calling for immediate action, in accordance with international humanitarian law, to avert critical food shortages in Iraq:
The Geneva Conventions stipulate that the UK government and other warring parties must ensure the provision of food and other essential items such as medicines, water, and shelter to all those who need them, both during and after a conflict, including those whose supplies are cut off as a result of military action. 14-16 million Iraqis - two thirds of the entire population - currently depend on food rations provided through the UN's Oil for Food (OFF) programme and distributed by 45,000 food agents. It is essential that these supply and distribution systems continue to function during the conflict. The longer and more widespread the war, the less likely it is that this will happen, causing hunger to those who depend on this programme. The World Food Programme estimates that between 5 and 10 million people would become immediately vulnerable if OFF supplies are cut off. Therefore, as a matter of urgency, a new UN Security Council resolution is needed to establish alternative food distribution systems in the event of a breakdown of OFF distribution systems.

In the same statement, the BOAG reminded the warring parties of their general legal responsibilities in the conduct of war itself:
[W]arring parties, including the UK government...have a legal obligation to take all necessary precautions to avoid civilian loss of life, under the Geneva Conventions. In accordance with International Humanitarian Law, civilians and installations essential to the survival of civilians, such as water and sanitation infrastructure, must not be targeted. Disproportionate harm to civilians through damage to dual-use infrastructure, such as roads and electricity supply, must also be avoided. Iraq's largely urban population relies on water pumping and treatment stations for its water and sanitation requirements. These stations in turn rely on electricity to function and could cease to operate without electricity. Attacks that do not distinguish between combatants and non-combatants are prohibited in international law. By their very nature, cluster bombs, fuel air bombs, landmines, chemical, biological, radiological and nuclear weapons can only be indiscriminate, in our opinion. There is a high potential for civilians to be trapped in cities throughout Iraq during this conflict. Even if Iraq deploys human shields close to military targets, forces attacking Iraq still have a responsibility to avoid disproportionate civilian casualties.

Read the complete statement by BOAG.
Permanent scars on enamel. Swollen, infected gums. Lines and cavities. None of these should be present at the completion of orthodontic treatment, yet these negative consequences affect patients whose oral hygiene is less than acceptable. Plaque and food debris accumulate around braces, producing acid that leaches calcium away from the surfaces of the teeth. In addition, the very high acid content of soft drinks, soda, sports drinks, energy drinks, flavored water, and sour candies contributes to destroyed enamel. Inflamed gums also cause teeth to move more slowly, as the body is busy fighting infection.
Battling scars, cavities, and gingivitis requires regular and thorough tooth brushing, several times per day. Only a toothbrush will disrupt these "bugs" and remove food debris stuck in the braces. Regular recall cleanings and examinations at your dentist's office during orthodontics are also critical.
Dr. Bowman has written a series of research publications describing the benefits of the application of fluoride varnishes around braces to strengthen the enamel surface. These materials are painted on the teeth at regular intervals during braces. Varnishes have dramatically reduced incidence and severity of scars or "white spot lesions"; however, they are not the only solution.
Opal Seal – Recharge Bonding
Recent research has demonstrated the effectiveness of applying a sealant to the front of the teeth prior to placing braces. As a result of these findings, we not only use Halo fluoride varnish routinely, but also now apply Opal Seal for every patient starting treatment.
If you would like to know more about our approaches and the research behind them, click here.
Carbon paper is thin paper coated with a mixture of wax and pigment, that is used between two sheets of ordinary paper to make one or more copies of an original document.
The exact origin of carbon paper is somewhat uncertain. The first documented use of the term "carbonated paper" was in 1806, when an Englishman named Ralph Wedgwood was granted a patent for his "Stylographic Writer." However, Pellegrino Turri had invented a typewriting machine in Italy by at least 1808, and since "black paper" was essential for the operation of his machine, he must have perfected his form of carbon paper at virtually the same time as Wedgwood, if not before (Adler, 1973). Interestingly, both men invented their "carbon paper" as a means to an end; they were both trying to help blind people write through the use of a machine, and the "black paper" was really just a substitute for ink.
In its original form Wedgwood's "Stylographic Writer" was intended to help the blind write through the use of a metal stylus instead of a quill. A piece of paper soaked in printer's ink and dried, was then placed between two sheets of writing paper in order to transfer a copy onto the bottom sheet. Horizontal metal wires on the writing-board acted as feeler-guides for the stylus and presumably helped the blind to write.
[Although invented in 1803, the steel pen only became common around the middle of the nineteenth century; the quill was still in use at the end of the century, and remained the symbol of the handwriting age. First introduced in the laborious days of copying manuscripts in monasteries about the seventh century, the quill was the civilised world's writing tool for a thousand years or more (Proudfoot, 1972).]
A few years later, Wedgwood developed the idea into a method of making copies of private or business letters and other documents. These copies were made at the time of writing and relied on the ink-impregnated paper, which Wedgwood called "carbonated paper." The writer wrote with a metal stylus on a sheet of paper thin enough to be transparent, using one of the carbon sheets so as to obtain a black copy on another sheet of paper placed underneath. This other sheet of paper was a good quality writing paper and the "copy" on it formed the original for sending out. The retained copy was in reverse on the underside of the transparent top sheet, but since the paper was very thin (what we know today as "tissue" paper) it could be read from the other side where it appeared the correct way round.
Eventually a company was formed to market Wedgwood's technique, but although the company prospered and many "Writers" were sold, Wedgwood's process was not adopted by many businesses. There was still plenty of time, money and labour to handle office work, and businessmen generally preferred their outgoing letters to be written in ink, fearing that such an easy copying process would result in wholesale forgery. In addition, unlike James Watt's copying method of 1780, which developed into the letter-copying book and became standard procedure in the 1870s, carbon copies were not admissible in court.
Pellegrino Turri had very personal reasons for developing carbon paper. He fell in love with a young woman, the Countess Carolina Fantoni, who had become blind "in the flower of her youth and beauty" (Adler, 1973), and Turri resolved to build her a machine that would enable her to correspond with her friends (including him) in private. Although the machine he constructed no longer exists, several of the Countess' letters do, and from her correspondence it is clear that Turri's machine combined carbon paper and the typewriter in a way that did not become prevalent for another 65 years.
[On November 6, 1808 the Countess wrote "I am desperate because I find myself almost without black paper." The "black paper" was prepared by Turri, who was the Countess' only source of supply, and although she preserved his machine carefully ("I will never forget that it is a precious gift made by you"), Turri's typewriter disappeared after being returned to his son upon the Countess' death in 1841 (Adler, 1973).]
By 1823 Cyrus P. Dakin of Concord, Massachusetts, was making carbon paper similar to Wedgwood's, and selling it exclusively to the Associated Press. Forty-eight years later, the same Associated Press was covering the balloon ascent of Lebbeus H. Rogers; a promotional stunt in Cincinnati for the biscuit and grocery firm of which Rogers had just been made a partner. During an interview in the newspaper offices after the flight, Rogers happened to see Dakin's carbon paper and immediately saw its commercial potential for the copying of office documents. The firm of L.H. Rogers & Co. was immediately founded in New York, and in 1870 achieved its first major sale ($1,500) to the United States War Department (Sheridan, 1991). However, it was not until 1872 and the development of a practical typewriter for commercial office use (the Sholes and Glidden typewriter), that Rogers' vision was proven correct.
For the first time a good copy could be produced at the same time as a good original. Whereas carbon paper produced a good original with a pen or pencil, it did not always provide a good copy (carbon paper required adequate pressure in order to provide both); and although a metal stylus could give a good black copy, it did not produce a very legible original. The typewriter, on the other hand, produced excellent originals and copies, and carbon copying on the typewriter progressively became standard practice in the office.
Originally carbon paper was made entirely by hand. A mixture of carbon black (a pigment) and oil in naphtha (a solvent) was applied to sheets of paper using a wide brush. Eventually, Rogers' company developed the first carbon-coating machine, and introduced the use of hot wax applied by rollers to replace the messy oil applied by brush. In this way modern one-sided carbon paper came to be made in a variety of qualities (Proudfoot, 1972). Rogers went on to produce the first typewriter ribbons (essentially long thin strips of carbon paper), and after searching the world for material with the right texture, marketed typewriter ribbons wound on spools and packed in individual boxes, which he sold along with his packages of carbon paper.
One of the disadvantages of carbon paper was that no matter how good the paper or the writer's technique, it could only ever produce a limited number of copies. Given the continued growth of business and its need for better communication, including the development of advertising, a means of unlimited copying became increasingly necessary. This requirement led to the development of the stencil duplicator (the best known was probably David Gestetner's Cyclostyle, patented in 1880) and other similar inventions, all of which became alternatives to carbon paper (given the huge demand for all types of copying processes at this time, demand for carbon paper was not immediately affected by the introduction of these other methods).
From the very beginning however, carbon paper could only produce copies of out-going correspondence (the stencil duplicator had this disadvantage as well); if copies were needed of incoming documents, they still had to be copied by hand. This problem was not solved until the middle of the twentieth century, when xerography became commercially available in the form of the photocopier (Proudfoot, 1972). The invention of the photocopier began the decline in demand for carbon paper that has continued to the present day.
Although the photocopier probably struck the biggest blow to carbon paper and other early methods of copying, a technology was developed around the same time with the potential to eliminate carbon paper entirely. NCR, or No Carbon Required paper, was developed by the National Cash Register Company in 1954 (Nielsen, 1983). This process relied on the pressure of a pen or typewriter to induce a chemical reaction between different coatings on adjacent sheets of paper. The original was produced by the pen or typewriter, while the chemical reaction left a blue copy sharply delineated on subsequent pages. NCR is ideal for business forms produced in large quantities, but is not economical for small applications. Consequently, it has yet to replace carbon paper completely.
Carbon paper is still commercially available today (1995). However, its use has declined significantly in the last 20 years, despite the proliferation of copying in the modern office over the same period. Perhaps it will continue to be used until the "paperless office" becomes a reality, or perhaps it will always be ideal for some applications. Regardless of its ultimate fate, carbon paper has already left its mark on one of the most recent technologies to enter the workplace: many electronic mail (email) programs include the abbreviation "cc" to indicate the recipients of a "carbon copy" of the electronic message.
Adler, Michael H. (1973) The writing machine (London: George Allen & Unwin Ltd.)
Adler, Michael (1990) Wedgwood's carbon paper of 1806. Typewriter Times: Journal of the Anglo-American Typewriter Collector's Society. (18): 6-7.
Barber, A. (1994) STP Ltd. Personal communication.
Blythin, David (1994) Kores Nordic (GB) Ltd. Personal communication.
Brown, Curtis L. (1991) Thesaurus of pulp and paper terminology (Atlanta: Institute of Paper Science and Technology Inc.)
Dale, Rodney & Weaver, Rebecca (1993) Machines in the office (London: British Library)
Fiddes, D. W. (1979) Business terms, phrases and abbreviations (London: Pitman)
Lavigne, John R. (1993) Pulp & paper dictionary (San Francisco: Miller Freeman Books)
Lippman, Paul (1992) American typewriters: a collector's encyclopedia (Hoboken, N.J.: Original & Copy)
McNeil, Ian (1990) An encyclopaedia of the history of technology (London: Routledge)
Nielsen, Norman A. (1983) Reprography. In Pulp and paper: chemistry and chemical technology, third edition, Vol. 4, edited by James P. Casey (New York: John Wiley & Sons, Inc.)
Proudfoot, W. B. (1972) The origin of stencil duplicating (London: Hutchinson & Co. Ltd.)
Sheridan, David (1991) Carbon paper and the typewriter ribbon. The Type Writer: Journal of Writing Machine Technology and History. (1): 15.
Simpson, J. A. & Weiner, E. S. C. (1989) The Oxford English dictionary, second edition (Oxford: Clarendon Press)
Sinclair, John ed. (1993) BBC English dictionary (London: HarperCollins Publishers Ltd.)
Webber, Paul (1994) Royal Sovereign Ltd. Personal communication.
© 1995 Kevin M. Laurence
Intelligence as an Emergent Behavior or, The Songs of Eden
May 2, 2002 by W. Daniel Hillis
Could we build a thinking machine by simply hooking together a large network of artificial neurons and waiting for intelligence to spontaneously emerge? Not likely, but by studying the properties of biological and emergent systems, a carefully constructed network of artificial neurons could be inoculated with thought, similar to yeast’s role in making beer. The clue may be in the “songs” of apes.
Originally published Winter 1988 in Daedalus, Journal of the American Academy of Arts and Sciences. Published on KurzweilAI.net on May 2, 2002.
Sometimes a system with many simple components will exhibit a behavior of the whole that seems more organized than the behavior of the individual parts. Consider the intricate structure of a snowflake. Symmetric shapes within the crystals of ice repeat in threes and sixes, with patterns recurring from place to place and within themselves at different scales. The shapes formed by the ice are consequences of the local rules of interaction that govern the molecules of water, although the connection between the shapes and the rules is far from obvious. After all, these are the same rules of interaction that cause water to suddenly turn to steam at its boiling point and cause whirlpools to form in a stream. The rules that govern the forces between water molecules seem much simpler than crystals or whirlpools or boiling points, yet all of these complex phenomena are called emergent behaviors of the system.
It would be very convenient if intelligence were an emergent behavior of randomly connected neurons in the same sense that snowflakes and whirlpools are the emergent behaviors of water molecules. It might then be possible to build a thinking machine by simply hooking together a sufficiently large network of artificial neurons. The notion of emergence would suggest that such a network, once it reached some critical mass, would spontaneously begin to think.
This is a seductive idea, since it allows for the possibility of constructing intelligence without first understanding it. Understanding intelligence is difficult and probably a long way off. The possibility that it might spontaneously emerge from the interactions of a large collection of simple parts has considerable appeal to a would-be builder of thinking machines. Unfortunately, as a practical approach to construction, the idea tends to be unproductive. The concept of emergence, in itself, offers neither guidance on how to construct such a system nor insight into why it would work.
Ironically, this apparent inscrutability accounts for much of the idea’s continuing popularity, since it offers a way to believe in physical causality while simultaneously maintaining the impossibility of a reductionist explanation of thought. For some, our ignorance of how local interactions produce emergent behavior offers a reassuring fog in which to hide free will.
There has been a renewal of interest in emergent behavior in the form of neural networks and connectionist models, spin glasses and cellular automata, and evolutionary models. The reasons for this interest have little to do with philosophy one way or the other, but rather are a combination of new insights and new tools. The insights come primarily from a branch of physics called "dynamical systems theory." The tools come from the development of new types of computing devices. Just as in the 1950s we thought of intelligence in terms of servomechanisms, and in the '60s and '70s in terms of sequential computers, we are now beginning to think in terms of parallel machines. This is not a deep philosophical shift, but it is of great practical importance, since it is now possible to study large emergent systems experimentally.
Inevitably, anti-reductionists interpret such progress as a schism within the field between symbolic rationalists who oppose them and gestaltists who support them. I have often been asked which "side" I am on. Not being a philosopher, my inclination is to focus on the practical aspects of this question: How would we go about constructing an emergent intelligence? What information would we need to know in order to succeed? How can this information be determined by experiment?
The emergent system that I can most easily imagine would be an implementation of symbolic thought, rather than a refutation of it. Symbolic thought would be an emergent property of the system. The point of view is best explained by the following parable about the origin of human intelligence. As far as I know, this parable of human evolution is consistent with the available evidence (as are many others), but since it is chosen to illustrate a point it should be read as a story rather than as a theory. It is reversed from most accepted theories of human development in that it presents features that are measurable in the archeological records such as increased brain size, food sharing, and neoteny, as consequences rather than as causes of intelligence.
Once upon a time, about two and a half million years ago, there lived a race of apes that walked upright. In terms of intellect and habit they were similar to modern chimpanzees. The young apes, like many young apes today, had a tendency to mimic the actions of others. In particular, they had a tendency to imitate sounds. If one ape went "ooh, eeh, eeh," it would be likely that the other one would repeat, "ooh, eeh, eeh." (I do not know why apes do this, but they do. As do many species of birds.) Some sequences of sounds were more likely to be repeated than others. I will call these "songs."
For the moment let us ignore the evolution of the apes and consider the evolution of the songs. Since the songs were replicated by the apes, and since they sometimes died away and were occasionally combined with others, we may consider them, very loosely, a form of life. They survived, bred, competed with one another, and evolved according to their own criterion of fitness. If a song contained a particularly catchy phrase that caused it to be repeated often, then that phrase was likely to be repeated and incorporated into other songs. Only songs that had a strong tendency to be repeated survived.
The survival of the song was only indirectly related to the survival of the apes. It was more directly affected by the survival of other songs. Since the apes were a limited resource, the songs had to compete with one another for a chance to be sung. One successful strategy for competition was for a song to specialize; that is, for it to find a particular niche where it would be likely to be repeated. Songs that fit particularly well with a specific mood or activity of an ape had a special survival value for this reason. (I do not know why some songs fit well with particular moods, but since it is true for me I do not find it hard to believe for my ancestors.)
Up to this point the songs were not of any particular value to the apes. In a biological sense they were parasites, taking advantage of the apes’ tendency to imitate. Once the songs began to specialize, however, it became advantageous for an ape to pay attention to the songs of others and to differentiate between them. By listening to songs, a clever ape could gain useful information. For example, an ape could infer that another ape had found food, or that it was likely to attack. Once the apes began to take advantage of the songs, a mutually beneficial symbiosis developed. Songs enhanced their survival by conveying useful information. Apes enhanced their survival by improving their capacity to remember, replicate, and understand songs. The blind forces of evolution created a partnership between the songs and the apes that thrived on the basis of mutual self-interest. Eventually this partnership evolved into one of the world’s most successful symbionts: us.
Unfortunately, songs do not leave fossils, so unless some natural process has left a phonographic trace, we may never know if this is what really happened. But if the story is true, the apes and the songs became the two components of human intelligence. The songs evolved into the knowledge, mores, and mechanisms of thought that together are the symbolic portion of human intelligence. The apes became apes with bigger brains, perhaps optimized for late maturity so that they could learn more songs. "Homo sapiens" is a cooperative combination of the two.
It is not unusual in nature for two species to live together so interdependently that they appear to be a single organism. Lichens are symbionts of a fungus and an alga living so closely intertwined that they can only be separated under a microscope. Bean plants need living bacteria in their roots to fix nitrogen from the soil, and in return the bacteria need nutrients from the bean plants. Even the single-celled "Paramecium bursaria" uses green algae living inside itself to synthesize food.
There may be an example even closer to the songs and the apes, where two entirely different forms of "life" form a symbiosis. In "Origins of Life," Freeman Dyson suggests that biological life is a symbiotic combination of two different self-reproducing entities with very different forms of replication. Dyson suggests that life originated in two stages. While most theories of the origin of life start with nucleotides replicating in some "primeval soup," Dyson’s theory starts with metabolizing drops of oil.
In the beginning, these hypothetical replicating oil drops had no genetic material, but were self-perpetuating chemical systems that absorbed raw materials from their surroundings. When a drop reached a certain size it would split, with about half of the constituents going to each part. Such drops evolved efficient metabolic systems even though their rules of replication were very different from the Mendelian rules of modern life. Once the oil drops became good at metabolizing, they were infected by another form of replicators, which, like the songs, have no metabolism of their own. These were parasitic molecules of DNA which, like modern viruses, took advantage of the existing machinery of the cells to reproduce. The metabolizers and the DNA eventually co-evolved into a mutually beneficial symbiosis that we know today as life.
This two-part theory of life is not conceptually far from the two-part story of intelligence. Both suggest that a pre-existing homeostatic mechanism was infected by an opportunistic parasite. The two parts reproduced according to different set of rules, but were able to co-evolve so successfully that the resulting symbiont appears to be a single entity.
Viewed in this light, choosing between emergence and symbolic computation in the study of intelligence would be like choosing between metabolism and genetic replication in the study of life. Just as the metabolic system provides a substrate in which the genetic system can work, so an emergent system may provide a substrate in which the symbolic system can operate.
Currently, the metabolic system of life is far too complex for us to fully understand or reproduce. By comparison, the Mendelian rules of genetic replication are almost trivial, and it is possible to study them as a system unto themselves without worrying about the details of metabolism which supports them. In the same sense, it seems likely that symbolic thought can be fruitfully studied and perhaps even recreated without worrying about the details of the emergent system that supports it. So far this has been the dominant approach in artificial intelligence and the approach that has yielded the most progress.
The other approach is to build a model of the emergent substrate of intelligence. This artificial substrate for thought would not need to mimic in detail the mechanisms of the biological system, but it would need to exhibit those emergent properties that are necessary to support the operations of thought.
What is the minimum that we would need to understand in order to construct such a system? For one thing, we would need to know how big a system to build. How many bits are required to store the acquired portion of human knowledge of a typical human? We need to know an approximate answer in order to construct an emergent intelligence with human-like performance. Currently the amount of information stored by a human is not known to within even two orders of magnitude, but it can in principle be determined by experiment. There are at least three ways the question might be answered.
One way to estimate the storage requirements for emergent intelligence would be from an understanding of the physical mechanisms of memory in the human brain. If that information is stored primarily by modifications of synapses, then it would be possible to measure the information storage capacity of the brain by counting the number of synapses. Elsewhere in this issue, Schwartz shows that this method leads to an upper bound on the storage capacity of the brain of 10 to the 15th bits. Even knowing the exact amount of physical storage in the brain would not completely answer the question of storage requirement, since much of the potential storage might be unused or used inefficiently. But at least this method can help establish an upper bound on the requirements.
A second method for estimating the information in symbolic knowledge would be to measure it by some form of statistical sampling. For instance, it is possible to estimate the size of an individual’s vocabulary by testing specific words randomly sampled from a dictionary. The fraction of words known by the individual is a good estimate of the fraction of words known in the complete dictionary. The estimated vocabulary size is this fraction times the number of words in the dictionary. The experiment depends on having a predetermined body of knowledge against which to measure. For example, it would be possible to estimate how many facts in the "Encyclopedia Britannica" were known by a given individual, but this would give no measure of facts not contained within the encyclopedia. The method is useful only in establishing a lower bound.
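The sampling method described above is easy to prototype. The sketch below is a toy simulation, not an experiment from the text: the synthetic dictionary, the subject's responses, and the sample size are all invented for illustration.

```python
import random

def estimate_vocabulary(dictionary_words, knows_word, sample_size=500, seed=0):
    """Estimate vocabulary size by testing a random sample of dictionary words.

    `knows_word` is a predicate standing in for the human subject's responses.
    """
    rng = random.Random(seed)
    sample = rng.sample(dictionary_words, sample_size)
    fraction_known = sum(knows_word(w) for w in sample) / sample_size
    # Scale the sampled fraction up to the whole dictionary.
    return fraction_known * len(dictionary_words)

# Toy demonstration: a 50,000-word synthetic dictionary in which the simulated
# "subject" knows exactly the first 20,000 words.
dictionary = [f"word{i}" for i in range(50_000)]
known = set(dictionary[:20_000])
estimate = estimate_vocabulary(dictionary, lambda w: w in known)
```

With a sample of 500 words the estimate lands within a few percent of the true 20,000-word vocabulary, which is the point of the method: a small sample bounds a large total.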
A related experiment is the game of "20 questions" in which one player identifies an object chosen by the other by asking a series of 20 yes-or-no questions. Since each answer provides no more than a single bit of information, and since skillful players generally require almost all of the 20 questions to choose correctly, we can estimate that the number of allowable choices is on the order of 2 to the 20th, or about one million. This gives an estimated number of allowable objects known in common by the two players. Of course, the measure is inaccurate since the questions are not perfect and the choices of objects are not random. It is possible that a refined version of the game could be developed and used to provide another lower bound.
A third approach to measuring the amount of information of the symbolic portion of human knowledge is to estimate the rate of acquisition and to integrate over time. For example, experiments on memorizing random sequences of syllables indicate that the maximum memorization rate of this type of knowledge is about one "chunk" per second. A "chunk" in this context can be safely assumed to contain less than 100 bits of information, so the results suggest that the maximum rate that a human is able to commit information to long-term memory is significantly less than 100 bits per second. If this is true, a 20-year-old human learning at the maximum rate for 16 hours a day would know less than 50 gigabits of information. I find this number surprisingly small.
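The 50-gigabit figure follows from simple arithmetic. The sketch below reproduces it under exactly the assumptions stated in the text: a maximum rate of 100 bits per second, sustained 16 hours a day for 20 years.

```python
# Upper bound on lifetime acquired knowledge, using the text's assumptions.
bits_per_second = 100          # generous upper bound on commitment to long-term memory
seconds_per_day = 16 * 3600    # 16 waking hours
days = 20 * 365                # a 20-year-old's lifetime, ignoring leap days

total_bits = bits_per_second * seconds_per_day * days
total_gigabits = total_bits / 1e9   # roughly 42 gigabits, under the 50-gigabit figure
```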
A difficulty with this estimate of the rate of acquisition is that the experiment measures only information coming through one sensory channel under one particular set of circumstances. The visual system sends more than a million times this rate of information to the optic nerve, and it is conceivable that all of this information is committed to memory. If it turns out that images are stored directly, it will be necessary to significantly increase the 100 bit per second limit, but there is no current evidence that this is the case. In experiments measuring the ability of exceptional individuals to store "eidetic" images of random dot stereograms, the subjects are given about 5 minutes to "memorize" a 128×128 image. Memorizing only a few hundred of these bits is probably sufficient to pass the test.
I am aware of no evidence that suggests more than a few bits per second of any type of information can be committed to long-term memory. Even if we accept at face value reports of extraordinary feats of memory, such as those of Luria’s showman in "The Mind of a Mnemonist," the average rate of commitment to memory never seems to exceed a few bits per second. Experiments should be able to refine this estimate, but even if we knew the maximum rate exactly, the rate averaged over a lifetime would probably be very much less. Knowing the maximum rate would establish an upper bound on the requirements of storage.
The sketchy data cited above would suggest that an intelligent machine would require 10 to the 9th bits of storage, plus or minus two orders of magnitude. This assumes that the information is encoded in such a way that it requires a minimum amount of storage, which for the purpose of processing information would probably not be the most practical representation. As a would-be builder of thinking machines, I find this number encouragingly small, since it is well within the range of current electronic computers. As a human with an ego, I find it distressing. I do not like to think that my entire lifetime of memories could be placed on a reel of magnetic tape. Hopefully experimental evidence will clear this up one way or the other.
There are a few subtleties in the question of storage requirements, in defining the quantity of information in a way that is independent of the representation. Defining the number of bits in the information-theoretical sense requires a measure of the probabilities over the ensemble of possible states. This means assigning an "a priori" probability to each possible set of knowledge, which is the role of inherited intelligence. Inherited intelligence provides a framework in which the knowledge of acquired intelligence can be interpreted. Inherited intelligence defines what is knowable; acquired intelligence determines which of the knowable is known.
Another potential difficulty is how to count the storage of information that can be deduced from other data. In the strict information-theoretical sense, data that can be inferred from other data add no information at all. An accurate measure would have to take into account the possibility that knowledge is inconsistent, and that only limited inferences are actually made. These are the kind of issues currently being studied on the symbolic side of the field of artificial intelligence.
One issue that does not need to be resolved to measure storage capacity is distributed versus localized representation. Knowing what types of representation are used in what parts of the human brain would be of considerable scientific interest, but it does not have a profound impact on the amount of storage in the system, or on our ability to measure it. Non-technical commentators have a tendency to attribute almost mystical qualities to distributed storage mechanisms such as those in holograms and neural networks, but the limitations on their storage capacities are well understood.
Distributed representations with similar properties are often used within conventional digital computers, and they are invisible to most users except in the system’s capacity to tolerate errors. The error-correcting memory used in most computers is a good example. The system is composed of many physically separate memory chips, but any single chip can be removed without losing any data. This is because the data is not stored in any one place, but in a distributed, non-local representation across all of the units. In spite of the "holographic" representation, the information storage capacity of the system is no greater than it would be with a conventional representation. In fact, it is slightly less. This is typical of distributed representations.
Storage capacity offers one measure of the requirements of a human-like emergent intelligence. Another measure is the required rate of computation. Here there is no agreed upon metric, and it is particularly difficult to define a unit of measure that is completely independent of representation. The measure suggested below is simple and the answer is certainly important, if not sufficient.
Given an efficiently stored representation of human knowledge, what is the rate of access to that storage (in bits per second) required to achieve human-like performance? Here "efficiently stored representation" means any representation requiring only a multiplicative constant of storage over the number of bits of information. This restriction eliminates the formal possibility of a representation storing a pre-computed answer to every question. Allowing storage within a multiplicative constant of the optimum does restrict the range of possible representations, but it allows most representations that we would regard as reasonable. In particular, it allows both distributed and local representations.
The question of the bandwidth required for human-like performance is accessible by experiment, along similar approaches as those outlined for the question of storage capacity. If the "cycle time" of human memory is limited by the firing time of a neuron, then the ratio of this answer to the total number of bits tells the fraction of the memory that is accessed simultaneously. This gives an indication of the parallel or serial nature of the computation. Informed opinions differ greatly in this matter. The bulk of the quantitative evidence favors the serial approach. Memory retrieval times for items in lists, for example, depend on the position and the number of items in the list. Except for sensory processing, most successful artificial intelligence programs have been based on serial models of computation, although this may be a distortion caused by the availability of serial machines.
My own guess is that the reaction time experiments are misleading and that human-level performance will require accessing large fractions of the knowledge several times per second. Given a representation of acquired intelligence with a realistic representation efficiency of 10%, the 10 to the 9th bits of memory mentioned above would require a memory bandwidth of about 10 to the 11th bits per second. This bandwidth seems physiologically plausible, since it corresponds to about a bit per second per neuron in the cerebral cortex.
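This estimate is back-of-envelope arithmetic, restated below. The 10 to the 9th bits of acquired knowledge and the 10% representation efficiency come from the text; the access count of ten per second and the cortical neuron count are rounded assumptions of mine, chosen to make the comparison concrete.

```python
# Back-of-envelope restatement of the bandwidth estimate.
acquired_bits = 1e9                # information content of acquired knowledge (from the text)
representation_efficiency = 0.10   # 10% efficient encoding (from the text)
accesses_per_second = 10           # "several times per second", rounded up (assumption)

stored_bits = acquired_bits / representation_efficiency   # ~1e10 bits actually stored
bandwidth = stored_bits * accesses_per_second             # ~1e11 bits per second

cortical_neurons = 1e11            # rough neuron count, assumed here for comparison
bits_per_neuron_per_second = bandwidth / cortical_neurons # ~1 bit/s per neuron
```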
By way of comparison, the memory bandwidth of a conventional electronic computer is in the range of 10 to the 6th to 10 to the 8th bits per second. This is less than 0.1% of the imagined requirement. For parallel computers the bandwidth is considerably higher. For example, a 65,536 processor Connection Machine can access its memory at approximately 10 to the 11th bits per second. It is not entirely coincidence that this fits well with the estimate above.
Another important question is: What sensory-motor functions are necessary to sustain symbolic intelligence? An ape is a complex sensory-motor machine, and it is possible that much of this complexity is necessary to sustain intelligence. Large portions of the brain seem to be devoted to visual, auditory, and motor processing, and it is unknown how much of this machinery is needed for thought. A person who is blind and deaf or totally paralyzed can undoubtedly be intelligent, but this does not prove that the portion of the brain devoted to these functions is unnecessary for thought. It may be, for example, that a blind person takes advantage of the visual processing apparatus of the brain for spatial reasoning.
As we begin to understand more of the functional architecture of the brain, it should be possible to identify certain functions as being unnecessary for thought by studying patients whose cognitive abilities are unaffected by locally confined damage to the brain. For example, binocular stereo fusion is known to take place in a specific area of the cortex near the back of the head. Patients with damage to this area of the cortex have visual handicaps, but show no obvious impairment in their ability to think. This suggests that stereo fusion is not necessary for thought. This is a simple example, and the conclusion is not surprising, but it should be possible by such experiments to establish that many sensory-motor functions are unnecessary. One can imagine, metaphorically, whittling away at the brain until it is reduced to its essential core. Of course it is not quite this simple. Accidental damage rarely incapacitates completely and exclusively a single area of the brain. Also, it may be difficult to eliminate one function at a time since one mental capacity may compensate for the lack of another.
It may be more productive to assume that all sensory-motor apparatus is unnecessary until proven useful for thought, but this is contrary to the usual point of view. Our current understanding of the phylogenetic development of the nervous system suggests a point of view in which intelligence is an elaborate refinement of the connection between input and output. This is reinforced by the experimental convenience of studying simple nervous systems, or of studying complicated nervous systems by concentrating on those portions most directly related to input and output. By necessity, almost everything we know about the function of the nervous system comes from experiments on those portions that are closely related to sensory inputs or motor outputs. It would not be surprising if we have overestimated the importance of these functions to intelligent thought.
Sensory-motor functions are clearly important for the application of intelligence and for its evolution, but these are separate issues from the question above. Intelligence would not be of much use without an elaborate sensory apparatus to measure the environment and an elaborate motor apparatus to change it, nor would it have been likely to evolve. But the apparatus necessary to exercise and evolve intelligence is probably very much more than the apparatus necessary to sustain it. One can believe in the necessity of the opposable thumb for the development of intelligence without doubting a human capacity for thumbless thought. It is quite possible that even the meager sensory-motor capabilities that we currently know how to provide would be sufficient for the operation of emergent intelligence.
These questions of capacity and scope are necessary in defining the magnitude of the task of constructing an emergent intelligence, but the key question is one of understanding. While it is possible that we will be able to recreate the emergent substrate of intelligence without fully understanding the details of how it works, it seems likely that we would at least need to understand some of its principles. There are at least three paths by which such understanding could be achieved. One is to study the properties of specific emergent systems, to build a theory of their capabilities and limitations. This kind of experimental study is currently being conducted on several classes of promising systems, including neural networks, spin glasses, cellular automata, classifier systems, and adaptive automata. Another possible path to understanding is the study of biological systems, which are our only real examples of intelligence, and our only example of an emergent system that has produced intelligence. The disciplines that have provided the most useful information of this type so far have been neurophysiology, cognitive psychology, and evolutionary biology. A third path would be a theoretical understanding of the requirements of intelligence, or of the phenomena of emergence. Examples of relevant disciplines are the theories of logic and computability, linguistics, and dynamical systems theory. Anyone who looks to emergent systems as a way of defending human thought from the scrutiny of science is likely to be disappointed.
One cannot conclude, however, that a reductionist understanding is necessary for the creation of intelligence. Even a little understanding could go a long way toward the construction of an emergent system. A good example of this is how cellular automata have been used to simulate the emergent behavior of fluids.
The whirlpools that form as a fluid flows past a barrier are not well understood analytically, yet they are of great practical importance in the design of boats and airplanes. Equations that describe the flow of a fluid have been known for almost a century, but except for a few simple cases they cannot be solved. In practice the flow is generally analyzed by simulation. The most common method of simulation is the numerical solution of the continuous equations.
On a highly parallel computer it is possible to simulate fluids with even less understanding of the system, by simulating billions of colliding particles that reproduce the emergent phenomena such as vortices. Calculating the detailed molecular interactions for so many particles would be extremely difficult, but a few simple aspects of the system such as conservations of energy and particle number are sufficient to reproduce the large-scale behavior. A system of simplified particles that obey these two laws, but are otherwise unrealistic, can reproduce the same emergent phenomena as reality. For example, it is possible to use particles of unit mass that move only at unit speed along a hexagonal lattice, colliding according to the rules of billiard balls. Experiments show that this model produces laminar flow, vortex streams, and even turbulence that is indistinguishable from the behavior of real fluids. Although the detailed rules of interaction are very different than the interactions of real molecules, the emergent phenomena are the same. The emergent phenomena can be created without understanding the details of the forces between the molecules or the equations that describe the flow of the fluid.
The recreation of intricate patterns of ebbs and flows within a fluid offers an example of how it is possible to produce a phenomenon without fully understanding it. But the model was constructed by physicists who knew a lot about fluids. That knowledge helped to determine which features of the physical system were important to implement, and which were not.
Physics is an unusually exact science. Perhaps a better example of an emergent system which we can simulate with only a limited understanding is evolutionary biology. We understand, in a weak sense, how creatures with Mendelian patterns of inheritance and different propensities for survival can evolve toward better fitness in their environments. In certain simple situations we can even write down equations that describe how quickly this adaptation will take place. But there are many gaps in our understanding of the processes of evolution. We can explain in terms of natural selection why flying animals have light bones, but we cannot explain why certain animals have evolved flight and others have not. We have some qualitative understanding of the forces that cause evolutionary change, but except in the simplest cases, we cannot explain the rate or even the direction of that change.
In spite of these limitations, our understanding is sufficient to write programs of simulated evolution that show interesting emergent behaviors. For example, I have recently been using an evolutionary simulation to evolve programs to sort numbers. In this system, the genetic material of each simulated individual is interpreted as a program specifying a pattern of comparisons and exchanges. The probability of an individual's survival in the system depends on the efficiency and accuracy of this program in sorting numbers. Surviving individuals produce offspring by sexual combination of their genetic material with occasional random mutation. After tens of thousands of generations, a population of hundreds of thousands of such individuals will evolve very efficient programs for sorting. Although I wrote the simulation producing these sorting programs, I do not understand in detail how they were produced or how they work. If the simulation had not produced working programs, I would have had very little idea about how to fix it.
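A toy version of this experiment fits in a page. Each genome below is a fixed-length list of compare-exchange pairs, fitness measures how well the resulting network sorts random inputs, and survivors breed by recombination with occasional mutation. All parameters (population size, genome length, mutation rate) are illustrative stand-ins, not the ones used in the original experiment.

```python
import random

random.seed(1)

N = 6              # length of the sequences to sort
GENOME_LEN = 30    # compare-exchange operations per individual
POP = 60

def apply_network(genome, xs):
    """Run a sequence of compare-exchange operations over a list."""
    xs = list(xs)
    for i, j in genome:          # i < j by construction
        if xs[i] > xs[j]:
            xs[i], xs[j] = xs[j], xs[i]
    return xs

def fitness(genome, trials=40):
    """Count correctly ordered adjacent pairs over random inputs."""
    score = 0
    for _ in range(trials):
        out = apply_network(genome, [random.random() for _ in range(N)])
        score += sum(a <= b for a, b in zip(out, out[1:]))
    return score                 # perfect score = trials * (N - 1)

def random_gene():
    i, j = sorted(random.sample(range(N), 2))
    return (i, j)

def offspring(a, b):
    cut = random.randrange(GENOME_LEN)
    child = a[:cut] + b[cut:]                # sexual recombination
    if random.random() < 0.3:                # occasional random mutation
        child[random.randrange(GENOME_LEN)] = random_gene()
    return child

pop = [[random_gene() for _ in range(GENOME_LEN)] for _ in range(POP)]
for generation in range(40):
    survivors = sorted(pop, key=fitness, reverse=True)[:POP // 2]
    pop = survivors + [offspring(random.choice(survivors),
                                 random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)
print(fitness(best, trials=100) / (100 * (N - 1)))  # fraction of ordered pairs
```

As in the essay's account, the resulting networks sort well, yet inspecting `best` gives little insight into *why* that particular sequence of exchanges works.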
The fluid flow and simulated evolution examples suggest that it is possible to make a great deal of use of a small amount of understanding. The emergent behaviors exhibited by these systems are a consequence of the simple underlying rules, which are defined by the program. Although the systems succeed in producing the desired results, their detailed behaviors are beyond our ability to analyze and predict. One can imagine that if a similar process produced a system of emergent intelligence, we would have a similar lack of understanding about how it worked.
My own guess is that such an emergent system would not be an intelligent system itself, but rather the metabolic substrate on which intelligence might grow. In terms of the apes and the songs, the emergent portion of the system would play the role of the ape, or at least that part of the ape that hosts the songs. This artificial mind would need to be inoculated with human knowledge. I imagine this process to be not so different from teaching a child. This would be a tricky and uncertain procedure since, like a child, this emergent mind would presumably be susceptible to bad ideas as well as good. The result would be not so much an artificial intelligence, but rather a human intelligence sustained within an artificial mind.
Of course, I understand that this is just a dream. And I will admit that I am more propelled by hope than by the probability of success. But if, within this artificial mind, the seed of human knowledge begins to sustain itself and grow of its own accord, then for the first time human thought will live free of bones and flesh, giving this child of mind an earthly immortality denied to us.
Attempts to create emergent intelligence, at least those that are far enough in the past for us to judge, have been disappointing. Many computational systems, such as homeostats, perceptrons, and cellular automata exhibit clear examples of emergent behavior, but that behavior falls far short of intelligence. A perceptron, for example, is a collection of artificial neurons that can recognize simple patterns. Considerable optimism was generated in the 1960s when it was proved that anything a perceptron could recognize, it could learn to recognize from examples. This was followed by considerable disappointment when it was realized that the set of things that could be recognized at all was very limited. What appeared to be complicated behavior of the system turned out in the final analysis to be surprisingly simple.
In spite of such disappointments, I believe that the notion of emergence contains an element of truth, an element that can be isolated and put to use.
A helpful analogy is the brewing of beer. The brewmaster creates this product by making a soup of barley and hops, and infecting it with yeast. Chemically speaking most of the real work is done by the yeast, which converts the starch to alcohol. The brewmaster is responsible for creating and maintaining the conditions under which that conversion can take place. The brewmaster does not need to understand exactly how the yeast does its work, but does need to understand the properties of the environment in which the yeast will thrive. By providing the right combination of ingredients at the right temperature in the right container, the brewmaster is able to create the necessary conditions for the production of beer.
Something analogous to this process may be possible in the creation of an artificial intelligence. It is unlikely that intelligence would spontaneously appear in a random network of neurons, just as it is unlikely that life would spontaneously appear in barley soup. But just as carefully mixed soup can be inoculated with yeast, it may be that a carefully constructed network of artificial neurons can be inoculated with thought.
The approach depends on the possibility of separating human intelligence into two parts, corresponding to the soup and the yeast. Depending on one’s point of view, these two parts can be viewed as hardware and software, intellect and knowledge, nature and nurture, or program and data. Each point of view carries with it a particular set of intuitions about the nature of the split and the relative complexity of the parts.
One way that biologists determine if a living entity is a symbiont is to see if the individual components can be kept alive separately. For example, biologists have tried (unsuccessfully) to prove the oil-drop theory by sustaining metabolizing oil drops in an artificial nutrient broth. Such an experiment for human intelligence would have two parts. One would be a test of the human ape’s ability to live without the ideas of human culture. This experiment is occasionally conducted in an uncontrolled form when feral children are reared by animals. The two-part theory would predict that such children, before human contact, would not be significantly brighter than nonhuman primates. The complementary experiment, sustaining human ideas and culture in an artificial broth, is the one in which we are more specifically interested. If this were successful we would have a thinking machine, although perhaps it would not be accurate to call it an artificial intelligence. It would be natural intelligence sustained within an artificial mind.
To pursue the consequences of this point of view, we will assume that human intelligence can be cleanly divided into two portions that we will refer to as acquired and inherited intelligence. These correspond to the songs and to the apes, respectively, or in the fermentation metaphor, the yeast and the barley soup. We will consider only those features of inherited intelligence that are necessary to support acquired intelligence, and only those features of acquired intelligence that impose requirements on inherited intelligence. We will study the interface between the two.
Even accepting this definition of the problem, it is not obvious that the interface is easy to understand or recreate. This leads to a specific question about the scope of the interface that can presumably be answered by experiment.
The functional scope of the interface between acquired and inherited intelligence is not the only property that can be investigated. To build a home for an animal, the first thing we would need to know is the animal’s size. This is also one of the first things we need to know in building an artificial home for acquired intelligence. This leads to question number two:
The guesses at answers that I have given are imprecise, but the questions are not. In principle they can be answered by experiment. The final question I will pose is more problematic. What I would like to ask is "What are the organizing principles of inherited intelligence?" but this question is vague and it is not clear what would be an acceptable answer. I shall substitute a more specific question that hopefully captures the same intent:
"Question IV: What quantities remain constant during the computation of intelligence; or, equivalently, what functions of state are minimized?"
This question assumes that inherited intelligence is some form of homeostatic process and asks what quantity is held static. It is the most difficult of the four questions, but historically it has been an important question to ask in areas where there was not yet a science to guide progress.
The study of chemistry is one example. In chemical reactions between substances it is obvious that a great number of things change and not so obvious what stays the same. It turns out that if the experiment is done carefully, the weight of the reactants will always equal the weight of the product. The total weight remains the same. This is an important organizing principle in chemistry and understanding it was a stepping stone to the understanding of an even more important principle: the conservation of the weights of the individual elements. The technical difficulty of defining and creating a truly closed experiment, in particular eliminating the inflow and outflow of gases, explains why chemists did not fully appreciate these principles until the middle of the 19th century.
Another very different example of a system that can be understood in terms of what is held constant is the system of formal logic. This is a set of rules under which sentences may be changed without changing their truth. A similar example, which has also been important to artificial intelligence, is the lambda calculus, which is the basis of the language Lisp. This is a system of transforming expressions in such a way that their "values" do not change, where the values are those forms of the expression which are not changed by the transformations. (This sounds circular because it is. A more detailed explanation would show it to be more so.) These formal systems are conceptually organized around that which is held constant.
In physics there are many examples of how conservations have been used successfully to organize our conception of reality, but while conservations of energy, momentum, mass, and charge are certainly important, I do not wish to make too much of them in this context. In this sense the principles of conservation will more likely resemble those of biology than physics.
One of the most useful conservation principles in biology appears in the notion of a gene. This is the unit of character determination that is conserved during reproduction. In sexual reproduction this can get complicated since an individual receives a set of genes from each of two parents. A gene that affects a given trait may not be expressed if it is masked by another, and there is not a simple correspondence between genes and measurable traits. The notion that atomic units of inheritance are always present, even when they are not expressed, was hard to accept, and it was not widely believed until almost a century after Mendel’s initial experiments. In fact the conservation is not perfect, but it is still one of the most important organizing principles in the study of living organisms.
In biology, the rules of conservation are often expressed as minimum principles. The two forms are equivalent. For instance, the minimum principle corresponding to the physical conservation of momentum is the principle of least action. A biological example is the principle of optimal adaptation, which states that species will evolve toward fitness to their environments. The distance to the ideal is minimized. A conservation principle associated with this is Fisher's Fundamental Theorem of Natural Selection, which states that the rate of change in fitness is equal to the genetic variance. In cases where this minimum principle can be applied, it allows biologists to quantitatively predict the values of various biological parameters.
For example, sickle-cell anemia is a congenital disease controlled by a recessive gene. Individuals who inherit the gene from both parents are likely to die without reproducing, but individuals who inherit the gene from a single parent are resistant to malaria. In certain regions of West Africa 40% of the population carries the gene. From this fact and the principle of optimal fitness, it is possible to predict that the survival advantage of resistance to malaria is about 25% in these regions. This estimate fits well with measured data. Similar methods have been used to estimate the number of eggs laid by a bird, the shape of sponges, and the gait of animals at different speeds. But these examples of applying a minimum principle are not so crisp as those of physics. Why, for example, do we not evolve a non-lethal gene that protects against malaria? The answer is complicated, and the principle of fitness offers no help. It is useful in aiding our understanding, but it does not explain all. This is probably the kind of answer to Question IV for which we will have to settle.
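The arithmetic behind this kind of prediction can be sketched with the textbook heterozygote-advantage model. The figures below are illustrative, not the essay's exact calculation: relative fitnesses are AA = 1 − s (susceptible to malaria), AS = 1 (resistant carrier), SS = 1 − t (sickle-cell disease), and balancing selection settles the sickle-allele frequency at q = s / (s + t).

```python
# Back-of-envelope heterozygote-advantage calculation (illustrative values).

def equilibrium_allele_freq(s, t):
    """Equilibrium sickle-allele frequency under balancing selection."""
    return s / (s + t)

def carrier_fraction(q):
    """Hardy-Weinberg fraction of heterozygous carriers, 2q(1 - q)."""
    return 2 * q * (1 - q)

s, t = 0.25, 1.0   # ~25% malaria survival advantage; SS effectively lethal
q = equilibrium_allele_freq(s, t)
print(f"allele frequency q = {q:.2f}")                   # 0.20
print(f"carrier fraction   = {carrier_fraction(q):.2f}")  # 0.32
```

Running the model in the other direction, an observed carrier frequency pins down the selection coefficient, which is how a fitness principle yields a quantitative, testable prediction.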
Even in physics, knowledge of the exact law does not really explain all behaviors. The snowflakes and whirlpools of water are examples. The forces that govern the interaction of water molecules are understood in some detail, but there is no analytical understanding of the connection between these forces and their emergent behaviors of water.
On the other hand, our goal is not necessarily to understand, but to recreate. In both of the examples mentioned, conservation principles give us sufficient understanding to recreate the phenomena.
In order to achieve this kind of understanding for intelligence it will be necessary to ask and answer the kinds of questions that are mentioned above.
I do not know the answer to Question IV. It is possible that it will be very complicated and that the interface between acquired and inherited intelligence will be difficult to reproduce. But it is also possible that it will be simple. If so, one can imagine recreating it: this would be the artificial substrate for thought.
Once this is achieved it will still remain to inoculate the artificial mind with the seed of knowledge. I imagine this to be not so different from the process of teaching a child. It will be a tricky and uncertain process since, like a child, this mind will presumably be susceptible to bad ideas as well as good. The first steps will be the most delicate. If we have prepared well, it will reach a point where it can sustain itself and grow of its own accord.
For the first time human thought will live free of bones and flesh, giving this child of mind an earthly immortality denied to us.
Dyson, Freeman. "The Origins of Life", Cambridge University Press, 1985.
Haldane, J. B. S. "The Causes of Evolution", Harper & Brothers, 1932.
Hillis, W. Daniel. "The Connection Machine", The MIT Press, 1985.
Luria, A. R. "Mind of the Mnemonist", Basic Books, 1968.
Newell, Allen. "Human Problem Solving", Prentice Hall, 1972.
Wolfram, Stephen. "Theory and Applications of Cellular Automata", World Scientific, 1986.
"Intelligence as an Emergent Behavior or, The Songs of Eden" reprinted by permission of Daedalus, Journal of the American Academy of Arts and Sciences, from the issue entitled, "Artificial Intelligence," Winter 1988, Vol. 117, No. 1.
The autonomic nervous system regulates functions in the body that are involuntary, such as blood pressure and digestion. This system malfunctions in Lewy body dementia (LBD), causing problems with blood pressure regulation, urinary incontinence and constipation. Many studies have been done on autonomic dysfunction in Parkinson’s disease and multiple system atrophy, both of which also are characterized by the presence of Lewy bodies in the brain. Research demonstrates that autonomic dysfunction predicts a shorter survival time in these disorders. Little is known about how autonomic dysfunction affects survival in Lewy body dementias.
Dr. Kajsa Stubendorff of Lund University in Sweden and other researchers in Europe studied 30 individuals with dementia with Lewy bodies (DLB; 16 patients) and Parkinson’s disease dementia (PDD; 14 patients) in a prospective, longitudinal follow-up study. Three aspects of autonomic dysfunction were assessed: orthostatic hypotension (OH), urinary incontinence and constipation. OH is a form of low blood pressure occurring after a person stands up, often after prolonged rest. The change in position causes a temporary reduction in blood flow to the brain and symptoms that include dizziness, lightheadedness, blurred vision and fainting.
Participants’ blood pressure and heart rate were recorded at baseline, 3 months and 6 months. Blood pressure readings were taken in different positions: after the subject had been lying down for at least 10 minutes, immediately upon standing up, and at one, three, five and ten minutes of standing. Eighty-three percent had at least one measurement of OH and 50 percent had persistent OH over the course of the study. Urinary incontinence and constipation were assessed by asking questions of the patient and caregiver. Thirty percent reported problems with urinary incontinence and 30 percent reported constipation requiring treatment.
Seven of the 30 patients died during the follow-up, five from the DLB group and two from the PDD group. Patients with persistent OH had a significantly shorter survival compared to those with no or non-persistent OH; there were, however, no differences in survival between those with or without constipation or urinary incontinence.
Subjects were divided up into three categories: those with no or mild OH (15), those with persistent OH but no urinary incontinence or constipation (7), and those with persistent OH, constipation and/or urinary incontinence. Patients in the third group had the shortest survival times, while those in the second group had the next shortest.
While people with OH may experience symptoms such as light-headedness, research shows that only 43 percent of non-demented patients with profound OH have typical symptoms. As such, orthostatic blood pressure measurements should be a routine part of monitoring all patients with Lewy body dementias. It is important to note that individuals with dementia may not show a significant drop in blood pressure until as late as 10 minutes after standing. Research indicates people with DLB have a more prolonged period of orthostasis after standing up than individuals with Alzheimer’s or healthy control subjects.
This is the first study to analyze how autonomic dysfunction affects survival in LBD. These results suggest that persistent autonomic dysfunction is possibly predictive of shorter survival time. Further research is needed to understand how autonomic dysfunction impacts other issues, such as quality of life, cognition, and activities of daily living.
This study first appeared in PLoS One in October, 2012.
Use Counterfactual Thinking for a Creativity Boost
You always want creativity to come naturally but sometimes you need to give it a push. Thoughts and ideas blog The 99u suggests that counterfactual thinking can help give you a boost when you need it.
Counterfactual thinking, also known as asking, "What might have been?" has been shown to increase creativity for short periods of time. To experiment with this technique, take events that have already happened and re-imagine different outcomes, alternating between the subtractive mindset (taking elements out of the event) and the additive mindset (adding elements into the event).
A silly example of counterfactual thinking in action can be seen on The Big Bang Theory, when one of the main characters makes a game of the phenomenon, asking his roommate: "In a world where Rhinoceroses are domesticated pets, who wins the Second World War?" You, however, can apply it to more realistic scenarios, such as mapping out outcomes whenever you are doing creative problem solving, subtracting or adding "what if" elements that would have affected the outcome.
When you get stuck with a creative block, try asking yourself what might be. When you're forced to think of an answer, the uninhibited outcome may produce the creative ideas you need. For six more tips, check out the full post over at The 99u.
7 Ways to Boost Your Creativity | The 99u
a brief speech about a social problem that suggests a potential solution, usually taking the form of a very short, web-based video
'The winning tole-ranter will be prized with having their tole-rant's heartfelt solution put on the road to reality when global tolerance uses its global networks, relationships and communications expertise to catapult it onto the public agenda.' (Global Tolerance, 1st October 2010)
'Click the photo to hear Sir Steve tole-rant about the bad press young people get …' (myplacesupport.co.uk, 2010)
'Rather than ranting about everything that is wrong in the world, tole-ranters speak from the heart about social problems, and point to potential solutions – in 60 seconds.' (Youth Leader Magazine, 2nd September 2010)
Ever felt so strongly about a principle that you would welcome the opportunity to 'get it off your chest' in front of a large group of listeners? Assisted by the magic of the Internet, you can now speak out on those issues you're so passionate about to a worldwide audience – courtesy of the new concept of a tole-rant.
A tole-rant (pronounced toll-uh-RANT) is a short speech about a social problem, usually taking the form of a 60 second video which features an individual's viewpoint and is spoken directly to a camera. Though a tole-rant is a lively, animated speech, it has none of the negative connotations of a rant, which is conventionally a long, loud, angry complaint. A tole-rant is by contrast short, punchy and inherently constructive, pointing to a positive solution.
The creation of a tole-rant consists of the following steps:
1. hitting the record button on a mobile phone, camera or other video device
2. starting with the statement 'I'm tole-ranting about …' where the speaker briefly introduces the social problem they want to raise awareness of
3. an explanation of why the problem the speaker has identified means so much to them (relating something from personal experience is more likely to make an impact on the viewer) and
4. pointing towards and/or describing a possible solution.
All this should take place in a mere 60 seconds! Typical topics featured in tole-rants include poverty, inequality, terrorism and climate change.
If you'd like to see some examples of tole-rants, check out a dedicated website, where you can watch them in action, vote on your favourites and even upload your own. There have been several high-profile supporters of the concept of the tole-rant – among them Sir Steve Redgrave, five times Olympic Champion in rowing, who has tole-ranted about the bad press given to today's youth.
The term tole-rant is a clever play on the verb/noun rant, describing the concept of angrily complaining about something, and the adjective tolerant, meaning 'willing to accept someone's beliefs or way of life without criticizing them'.
The word was first coined in November 2009 when organization 'global tolerance' launched the concept to coincide with the United Nations International Day of Tolerance. Global tolerance is a communications agency which uses the power of the media to inspire positive social change, and has worked with a number of high-profile social activists, including the Dalai Lama and the Prince of Wales.
Tole-rant is used both as a noun and intransitive verb, and the related noun tole-ranter describes someone who participates. Predictably, the concept of the tole-rant has been galvanized by social media, and is particularly popular in social networking domains like Facebook and Twitter.
Read last week's BuzzWord. Nevertiree.
This article was first published on 8th November 2010. | <urn:uuid:f69025b6-d097-4b96-8395-ed0843834c08> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.macmillandictionary.com/us/buzzword/entries/tole-rant.html",
"date": "2016-09-26T22:28:04",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9464803338050842,
"token_count": 831,
"score": 2.515625,
"int_score": 3
} |
Santa Claus at the Healey Asylum, Lewiston, c.1950
Contributed by Franco-American Collection
The Healey Asylum (Asile Healey) was an orphanage operated by the Sisters of Charity of Ste-Hyacinthe, a religious order from Quebec who were invited to Lewiston to help care for the Franco-American population.
The Asylum opened in May 1893 on the corner of Ash and Bates Streets in Lewiston to accommodate the boys under the care of the Sisters who had previously been living at their convent on Sabattus Street.
Although originally conceived as an orphanage, the Healey Asylum functioned more like a boarding school for needy families. Boys were cared for until the age of 12, as the sisters considered it improper to care for boys as they reached puberty.
The Asylum was named after Bishop James Healy (bishop from 1875 to 1900), the second Roman Catholic Bishop of Maine, who donated $5,000 to its construction. In 1968, the orphanage became a daycare center; in 1970 it became part of the federal Model Cities program.
The sisters continued to work there until the Asylum closed in 1973.
- Title: Santa Claus at the Healey Asylum, Lewiston, c.1950
- Subject Date: circa 1950
- Town: Lewiston
- County: Androscoggin
- State: ME
- Media: Photograph
- Dimensions (cm): 30 x 21
- Object Type: Image
For more information about this item, contact:
USM, 51 Westminster Street, Lewiston, ME 04240
In recent years, as the first unit testing frameworks became available and methodologies like TDD went mainstream, unit testing has become an increasingly popular strategy in software development.
The main advantages that unit testing can bring to a software development project can be summarised as two main purposes:
- Design purpose. Help programmers creating new code.
- Correctness purpose. Make sure that the behaviour of the different independent small parts of the newly created code are correct.
But what usually gets overlooked is that unit testing also involves risks, especially when applied using dogmatic approaches like “every public method should be unit tested” or “everything needs to be designed so that it is easily unit tested”.
The main risk with unit testing appears when too many unnecessary tests are created. This risk becomes obvious when, every time the code changes, too much time is spent fixing tests, hurting the productivity of the developers.
The key to effectively using unit tests is to find a balance between your tests and the amount of time you need to maintain them. When looking for this balance, it is important to remember a few principles to ensure that you write efficient unit tests.
1. Behaviour is the key element to test.
Focusing on behaviour is the key to producing a good unit test. That way, if you refactor the logic inside a method without changing its behaviour, all the related tests should keep passing.
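As a sketch of what this looks like in practice, the test below asserts only on observable results. The function `normalize_username` is a made-up example, not from the original post.

```python
# A behaviour-focused test pins down *what* the code does, not *how* it does it.

def normalize_username(raw):
    # One possible implementation; the test below must keep passing even if
    # this body is rewritten (say, with a regular expression).
    return raw.strip().lower().replace(" ", "_")

def test_normalize_username_behaviour():
    # Assertions describe observable behaviour only.
    assert normalize_username("  John Smith ") == "john_smith"
    assert normalize_username("ALICE") == "alice"
    assert normalize_username("bob") == "bob"

test_normalize_username_behaviour()
```

Because nothing here depends on the implementation strategy, a refactor that preserves behaviour leaves the test green.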
2. Not every method is applicable for unit testing.
There are many dogmas in agile, and having a unit test for each public method seems to be one. While it is true that, where possible, a method should be unit tested, if you cannot test its behaviour it is better to leave it alone (see point 1). Unnecessary unit tests are going to lock you down from changing your code for no good reason (see point 5). The areas of your code that can't be unit tested should be covered by integration or manual testing instead (see point 4).
Some clear examples of these types of methods are those that encapsulate calls to a framework, methods that loop through a list of items and then delegate to a different method, and methods that simply write log output…
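The snippet below sketches two such methods (all names are hypothetical). Their behaviour lives almost entirely in a collaborator or in a framework, so a unit test would mostly re-verify its own mock setup; integration tests cover them better.

```python
import logging

logger = logging.getLogger("app")

def save_all(records, dao):
    # Loops through a list and delegates every item to another method;
    # a unit test here would only re-verify the loop.
    for record in records:
        dao.save(record)

def log_startup(version):
    # Encapsulates a call to the logging framework; the interesting
    # behaviour belongs to the framework, not to this method.
    logger.info("Application started, version %s", version)
```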
3. Unit tests which are not useful anymore should be deleted.
Tests are created mainly in two fashions:
- After the code is completed: These tests are created to build in an automated check for correctness of the associated methods. Useless tests (see points 1 and 2) may be created by mistake; if so, you shouldn't feel any shame about deleting them or, where possible, refactoring them to focus on behaviour.
- Before the code is completed (especially known from TDD): In these cases, tests have a second goal: to help programmers come up with cleaner code. After the code is complete, it is usually a good idea to review the tests created and to delete or refactor them where appropriate. This, unfortunately, never made it onto the list of steps to follow in TDD…
4. Unit tests will never substitute for manual and integration testing.
Unit tests, once your code is completed, help you diagnose whether the individual parts of your application are working as you expect. This is important, but it is very far from proving that your application is robust and works according to your customers' expectations, which is your main goal.
Unit tests are only a small part of the complete picture: you are going to need integration tests for areas of your code where unit tests can't prove their behaviour, and you are going to need manual testing in areas where you can't create an automated test, or for more abstract areas like usability and UI testing.
5. Unit tests that lock down your code from changes are evil.
If there is one particular type of test to be avoided at any cost, it is the test that locks down your code from changes without adding any value. Let me illustrate this with some pseudo-code:
MyClass.MyMethod (magicParam1, magicParam2)
START
    magicReturnValue = someOtherClass.doSomething (magicParam1, magicParam2)
    veryRemoteClass.stuff (magicReturnValue)
    managerOfManager.buzzinga (magicParam2)
END
MyTestClass.MyMethodTest
START
    when (someOtherClass.doSomething (magicParam1, magicParam2)).thenReturn (magicReturnValue)
    myClassToTest.MyMethod (magicParam1, magicParam2)
    verifyICalledThis (someOtherClass.doSomething (magicParam1, magicParam2)).andReturnedValueIs (magicReturnValue)
    verifyICalledThis (veryRemoteClass.stuff (magicReturnValue))
    verifyICalledThis (managerOfManager.buzzinga (magicParam2))
END
What is the previous test achieving? Well, it is actually achieving a lot… of pain… This is the one thing that sets me off when I see other people's code: not only is this test not proving anything about the expected behaviour of the code, but if someone refactors the main class while maintaining the same logic, he will find that the test fails miserably, only because the code changed, not because there is any unexpected change of behaviour in the code…
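To make the contrast concrete, here is a hedged Python sketch (all names invented, loosely mirroring the pseudocode above) showing the two styles side by side: a brittle test that verifies which calls were made, and a behaviour test that only checks the observable result:

```python
from unittest.mock import Mock

# Invented example, loosely mirroring the pseudocode above.
class MyClass:
    def __init__(self, other, remote):
        self.other = other
        self.remote = remote

    def my_method(self, param):
        value = self.other.do_something(param)
        self.remote.stuff(value)
        return value

def brittle_test():
    # Locks down the implementation: it asserts exactly which calls
    # were made, so almost any internal refactor makes it fail.
    other, remote = Mock(), Mock()
    other.do_something.return_value = 42
    MyClass(other, remote).my_method("x")
    other.do_something.assert_called_once_with("x")
    remote.stuff.assert_called_once_with(42)

def behaviour_test():
    # Checks only the observable result for a given input.
    other, remote = Mock(), Mock()
    other.do_something.return_value = 42
    assert MyClass(other, remote).my_method("x") == 42

brittle_test()
behaviour_test()
```

Changes to the internal call structure break the brittle variant even when the returned value is unchanged, while the behaviour variant only fails if the method's contract itself changes.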
The funny thing about this type of test, at least in my experience, is that it is usually most defended by the extremist agilists: everything must be unit tested… Have they perhaps forgotten about their beloved agile process motto?
"dump": "CC-MAIN-2016-40",
"url": "http://www.makinggoodsoftware.com/2011/12/15/how-to-write-efficient-unit-tests-5-principles-for-unit-testing/",
"date": "2016-09-26T22:24:33",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9256803393363953,
"token_count": 1137,
"score": 3,
"int_score": 3
} |
create a plot of one or more polygons
polygonplot3d(L, options)
polygonplot3d(A, options)
polygonplot3d(v1, v2, v3, options)
L - list of polygon vertices, each given as a 3-element list
A - n by 3 Matrix, where n is any positive integer
v1, v2, v3 - Vectors, all of the same length
options - (optional) equations of the form option=value, where option is any of the available plot options
The polygonplot3d command is used to create a 3-D plot of a polygon. The polygon's vertices are provided as the list L, the Matrix A, or the Vectors v1, v2 and v3.
The list L must contain 3-element lists or Vectors [x, y, z], each representing the numeric x-, y- and z-coordinates of a vertex.
The Matrix must be n by 3, where n is any positive integer. Each row of the Matrix contains the x-, y- and z-coordinates of a vertex. If a 3 by n Matrix is given, with n not equal to 3, then it will be automatically transposed. The Vectors, representing the x-coordinates, y-coordinates and z-coordinates respectively, can have any length, but all must have the same length.
Remaining arguments are interpreted as options which are specified as equations of the form option = value. These options are the same as those available for the plot3d command, as described in plot3d options.
Multiple polygons may be plotted by providing a list containing polygons in the list or Matrix form, as described above. In this case, the color option value can be a list of n colors, where n is the number of polygons.
another_poly := [seq([cos(Pi*T/40), sin(Pi*T/40), T/40], T = 0 .. 40)]:
list_polys := [seq([seq([T/10, S/20, sin(T*S/20)], T = 0 .. 20)], S = 1 .. 4)]:
In recent years, the relevance of religious studies has grown steadily, as our globalized society grapples with challenges related to health, environmental degradation, war, and the changing perceptions of our own origin. Religion is recognized as a vital dimension of society and requires the best of our critical insight in order to appreciate its impact.
McGill’s School of Religious Studies recognizes the crucial role played by religion and brings the highest level of academic excellence to the study and analysis of the world’s religions as phenomena of human society. The School takes a multi-disciplinary approach to religious scholarship, incorporating perspectives from history, sociology, anthropology, philosophy, politics, and literature to enrich students’ appreciation of the extraordinary richness of their religious heritage and the diversity of contemporary religious expression worldwide.
McGill’s reputation as a leading centre for religious studies is enhanced by the School’s team of world-class scholars—experts in world religions and cultures and their impact on social, political, educational, and health issues.
Clockwise from top left:
(1) Prof. Davesh Soneji has been working with communities of Hindu temple dancers and courtesans for more than a decade. He is shown here with the late Maddula Venkataratnam and her large matrifocal kinship network in Tatipaka village, Andra Pradesh, South India.
(2) Students from the School of Religious Studies working at the Tel Dan archaeological site in Israel. This research trip was led by Professor Patricia G. Kirkpatrick.
(3) Professor Lara Braitstein at the Boudhanath Stupa in Kathmandu, Nepal, where she spent time studying Indo-Tibetan Buddhism.
(4) The Column of Marcus Aurelius in Rome, where Professor Ellen Aiken studies the Roman imperial context of ancient Christianity.
Asking the right questions
McGill’s School of Religious Studies is one of the first places in North America where the world’s religions have been studied rigorously in a university setting, alongside professional training for ministry. Not only does this provide an ideal venue for fostering the methods and interpretive framework so critical to the study of religion; it also nurtures an appreciation for the perspectives and practices within religious traditions.
Considerable attention has been given over the years to the role of religion in civil society, explored through a global perspective. In teaching and research, the School has engaged crucial questions regarding public policy, law, ethics, gender, pluralism, and the environment.
The vision and generosity of the late William and Henry Birks have played a pivotal role in the School’s growth. These two benefactors believed that the study of religion and theological cultures and the academic training of clergy were best carried out in a university setting. They had a comprehensive view of religious studies—one that took in the religions of the whole world—and they believed that such an approach would help to address today’s and tomorrow’s questions.
The Centre for Research on Religion (CREOR)
The School’s expertise in world religions and cultures—ranging from Christianity and Judaism to Hinduism and Buddhism—along with McGill’s Institute of Islamic Studies and Department of Jewish Studies in the Faculty of Arts—have provided fertile ground for the creation of a Centre for Research on Religion (CREOR).
The Centre, established in 2005, serves as a broad academic platform to coordinate and support interdisciplinary and interfaculty research on the identities of the world’s religions. It brings prominent scholars from fields such as anthropology, the classics, education, law, medicine, philosophy, psychology, and sociology and those studying the world’s religions together in small, targeted research groups to discuss and debate issues of present-day relevance.
Many members of CREOR are internationally renowned experts in social and religious changes in the fields of health care, human rights, minority politics and public policy. This initiative has fostered collaboration among researchers at McGill and further afield through scholarly meetings and colloquia that have in turn provided impetus for interdisciplinary research projects.
"dump": "CC-MAIN-2016-40",
"url": "http://www.mcgill.ca/religiousstudies/about/support/impact",
"date": "2016-09-26T22:57:54",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.937861442565918,
"token_count": 847,
"score": 2.609375,
"int_score": 3
} |
An antioxidant to protect mitochondrial function stopped an MS-like syndrome in mice, researchers found. Also this week: getting closer to an insulin pill and a new approach to flu treatment.
Antioxidant Protects Nerves in MS
A commercially available antioxidant that targets mitochondria, MitoQ, may provide neuroprotection against the central nervous system ravages of multiple sclerosis, a mouse study suggested.
Mitochondrial dysfunction and the accumulation of reactive oxygen species have been implicated in the axonal damage of MS, so a group of researchers from Oregon Health and Science University in Portland led by P. Hemachandra Reddy, PhD, administered MitoQ (mitochondrial CoQ10) to mice with experimentally induced autoimmune encephalomyelitis, a common model for MS. Clinical and behavioral symptoms were both delayed and attenuated in the treated animals, whereas neurologic disabilities including limb paralysis developed within 2 weeks in untreated controls.
Further analyses revealed that MitoQ treatment reduced inflammation in the central nervous system and spine and helped preserve neurons against demyelination and the cytotoxic events that can ensue. These findings support MitoQ as a "promising neuroprotective treatment for patients with MS," the researchers wrote in Biochimica et Biophysica Acta.
-- Nancy Walsh
Insulin Pill Within Reach?
A multipronged approach to protect insulin as it travels through the digestive system could allow delivery in pill form, a study suggested.
A way to deliver insulin without daily injections has been sought for decades, but insulin is easily broken down by digestive enzymes and not well absorbed from the gut into the bloodstream.
Inhaled insulin and transdermal insulin have their own issues, Sanyog Jain, PhD, of India's National Institute of Pharmaceutical Education and Research, and colleagues noted.
To clear those hurdles, the researchers packaged insulin in liposomes then wrapped them in polyelectrolyte layers for protection. Finally, they attached folic acid to boost transport across the intestinal wall into the blood.
In rats, the delivery system lowered blood glucose levels nearly as well as injected insulin but kept them down longer, up to 18 hours, the group reported in Biomacromolecules.
-- Crystal Phend
Novel Drugs Tame Chagas
A new class of compounds might effectively treat Chagas disease, in vitro and mouse studies suggested.
Caused by a parasite called Trypanosoma cruzi, Chagas disease is endemic from the southwest U.S. down into South America, although it is considered an emerging threat in the rest of the U.S., Europe, Japan, and Australia. The disease can be treated effectively in the acute phase by benznidazole, but the drug is not effective during the chronic phase.
Momar Ndao, DVM, PhD, of McGill University in Montreal, and colleagues developed new compounds called reversible cysteine protease inhibitors that disrupt an enzyme -- cruzipain -- that is involved in a number of essential parasitic functions.
After demonstrating effectiveness in in vitro studies, the researchers tested the two most effective compounds versus benznidazole in mice. All groups showed significant reductions in parasite burden in the blood, heart, and esophagus, but cure rates for acute infections were higher with the two newer agents (90% and 78% versus 71%).
"The efficacy shown in these T. cruzi murine studies suggests that nitrile-containing cruzipain inhibitors show promise as a viable approach for a safe and effective treatment of Chagas disease," Ndao and colleagues wrote in Antimicrobial Agents and Chemotherapy.
-- Todd Neale
Antisense for AAT Deficiency
An antisense oligonucleotide drug may be a viable treatment for alpha-1-antitrypsin (AAT) deficiency, according to researchers at Isis Pharmaceuticals.
Writing in the Journal of Clinical Investigation, the group said they had developed an antisense sequence that blocked expression of human AAT in mouse and monkey models. That may seem like an odd approach to a deficiency syndrome, but many cases of AAT deficiency involve a defective protein that forms toxic aggregates in the liver. The only current treatment in such cases is liver transplant.
The prototype anti-AAT oligonucleotide reported in JCI halted progression of liver disease in the mouse model with short-term administration. Chronic treatment led to actual reversal of liver disease, the researchers claimed.
AAT expression in cynomolgus monkeys was greatly reduced with the same agent, "demonstrating potential for this approach in higher species," the Isis group wrote.
-- John Gever
Toys Can Harbor Bugs
Streptococcus pyogenes and Streptococcus pneumoniae are important causes of morbidity and mortality, but investigations have suggested they die rapidly in the environment. That may not be entirely true, however, according to researchers led by Anders Hakansson, PhD, of the State University of New York at Buffalo.
Earlier research used broth-grown planktonic populations of the organisms -- single cells that swim in a liquid medium, they noted in Infection and Immunity. But both organisms are thought to colonize people in the form of biofilms, which they report remained infectious in a mouse model.
Perhaps more important for day-to-day living, they performed direct bacteriologic cultures of items in a day-care center and found high levels of viable streptococci of both species. Indeed, four out of five stuffed toys tested positive for S. pneumoniae and several surfaces, such as cribs, tested positive for S. pyogenes even after being cleaned. The tests took place just before the center opened in the morning -- hours after the last human contact.
The findings "should make us more cautious about bacteria in the environment since they change our ideas about how these particular bacteria are spread," Hakansson said in a statement.
-- Michael Smith
Counter-Countermeasures for Fighting Flu
Antibodies targeting the T-cell inhibiting protein PD-1 (programmed cell death receptor) may inhibit flu infections, according to research from Emilio Flano, PhD, of Nationwide Children's Hospital in Columbus, Ohio, and colleagues.
Some viruses, including the influenza virus, exploit this receptor to dial down the host's immune response. Blocking it could therefore thwart this effort and allow T- cell responses to remain active and vigorous.
A mouse model of infection showed that flu viral load dropped within 3 days of intranasal administration of a PD-1 ligand activity-blocking antibody, they wrote online in the Journal of Virology.
Viral levels were undetectable at 5 days. A human cell culture model of viral infection showed a similar reaction to the antibody.
However, the researchers noted that past attempts to blockade PD-1/ligand were not effective in humans for treating hepatitis, HIV, and other persistent viral infections.
-- Cole Petrochko
"dump": "CC-MAIN-2016-40",
"url": "http://www.medpagetoday.com/LabNotes/LabNotes/43601",
"date": "2016-09-26T22:28:34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9353321194648743,
"token_count": 1441,
"score": 2.734375,
"int_score": 3
} |
By Dan Quisenberry
Recent student testing in Michigan showed something shocking: fewer than 50 percent of students in grades 4, 7 and 11 are proficient in writing, and the proficiency rates in subjects like reading, mathematics and science aren’t what they should be either.
We know a few things about this statistic. First, each and every day, teachers across Michigan, in all types of schools, are putting our kids first and devoting their lives to educating our children, and still too many students are unprepared. Second, significant problems exist, often having much more to do with the problems of adults than of children, which are leaving parents with too few options and their children trapped in schools that simply do not measure up. And third, while adults argue, the students being failed never get back this missed opportunity.
There are, however, bright spots on Michigan’s educational landscape. Michigan’s charter public schools are providing parents with desperately needed choices for their children’s education, and charters, by their nature and design, are improving and implementing innovative approaches like Governor Snyder has outlined that teach children, empower parents and better manage schools.
Though the MEAP is an imperfect measure, recently released test scores show many charter public schools are helping families across the state by providing quality choices, even in Michigan’s toughest school districts.
In Flint, Grand Rapids, Lansing and Detroit, students in charter schools are succeeding, earning higher scores on the MEAP math and reading tests than their traditional public school counterparts. For example, the traditional Grand Rapids Public Schools produced only a 65.4 percent proficiency rate on the MEAP reading tests. Students at charter public schools in Grand Rapids achieved an 81.1 percent proficiency rate.
In Detroit, the numbers are even more astounding. The MEAP measures a school’s effectiveness by measuring students’ performance in math, reading, writing, social studies and science. Across 18 unique MEAP tests taken by students in grades 3 through 9, Detroit’s tremendous charter schools dramatically outperformed the Detroit Public Schools on every single one.
And in largely minority districts like Grand Rapids and Detroit, charters continue to shine, with proficiency rates among African-American students in charter public schools coming in six points higher than the statewide average in traditional public school and outpacing traditional public schools each of the last six years.
These remarkable results come because charters promote an atmosphere of innovation in education. They are held accountable and demand accountability. They open their doors to students everywhere, including special education students and are legally prohibited from “cherry picking” students. They embrace and thrive on the competition that empowers parents. They operate more efficiently and stretch each public dollar more because of the fact they receive $1,329 less per student per year than their traditional public school counterparts. And they see student achievement as the result of high expectations.
Our students’ achievement is proof that innovation in education is happening in charter schools across Michigan and that it is working. The long waiting lists at two-thirds of the state’s charter schools prove that when given a choice, parents will seek out what’s best for their kids. Parents and students deserve that choice. Governor Snyder and the legislature should act this year to lift the cap on charter schools and ensure they have it.
Dan Quisenberry is president of the Michigan Association of Public School Academies in Lansing.
"dump": "CC-MAIN-2016-40",
"url": "http://www.mlive.com/opinion/grand-rapids/index.ssf/2011/06/guest_column_charter_schools_a.html",
"date": "2016-09-26T23:12:52",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9634390473365784,
"token_count": 695,
"score": 2.875,
"int_score": 3
} |
The latest news from academia, regulators
research labs and other things of interest
Posted: July 4, 2008
Some fundamental interactions of matter turn out to be fundamentally different than thought
(Nanowerk News) Collisions have consequences. Everyone knows that. Whether it's between trains, planes, automobiles or atoms, there are always repercussions. But while macroscale collisions may have the most obvious effects—mangled steel, bruised flesh—sometimes it is the tiniest collisions that have the most resounding repercussions.
Such may be the case with the results of new experimental research on collisions between a single hydrogen atom and a lone molecule of deuterium—the smallest atom and one of the smallest molecules, respectively—conducted by a team led by Richard Zare, a professor of chemistry at Stanford University.
When an atom collides with a molecule, traditional wisdom said the atom had to strike one end of the molecule hard to deliver energy to it. People thought a glancing blow from an atom would be useless in terms of energy transfer, but that turns out not to be the case, according to the researchers.
"We have a new understanding of how energy can be transferred in collisions at the molecular scale," said Zare, senior author of a paper presenting the results in the July 3 issue of Nature.
Every atom or molecule, even if it has no charge, has electrostatic forces around it—sort of like the magnetic field of the Earth. Those chemical forces exert a pull on any other atom or molecule within range, trying to form a chemical bond.
What Zare and his team found is that a speeding hydrogen atom does not have to score a direct hit on a deuterium molecule, a form of molecular hydrogen made up of two heavy isotopes of hydrogen, to set the molecule vibrating. It only needs to pass closely enough to exert its tiny chemical force on the molecule. Vibrating molecules matter because they are more energized, making them more reactive. Thus, energy transfer effectively softens them up for future reactions.
"This has changed a very simple idea that we cherished—that to make a molecule highly vibrationally excited, you basically had to crush it, squeeze it, hit it over the head. Compress some bond and the molecule would snap back," Zare said. "We found quite the opposite."
One could compare it to the difference between a punch in the stomach and a caress on the cheek. Both can set the senses tingling, but in very different fashions.
Zare's team discovered that as a hydrogen atom passed close to a deuterium molecule, the chemical forces tugged on the nearest of the deuterium atoms in the molecule, pulling it away from the other deuterium atom. But if the tug was not strong enough to break the two deuterium atoms apart, as the hydrogen atom moved farther away its hold on the deuterium atom would weaken. The deuterium atom would eventually slip from its grip and snap back toward the other deuterium atom, initiating an oscillation, or vibration.
"What we are really seeing is the result of a frustrated chemical reaction," Zare said. "The molecule wants to react. It just didn't get into the right position with the right conditions so that it could react."
Zare went on to picture this process as follows: "The deuterium molecule is in a happily married state until the hydrogen atom flies by and attracts the nearest deuterium atom. This deuterium atom in the middle is in a giant tug of war. It is being fought over by two lovers, two highly similar atoms that are both attracted to the middle deuterium atom. This affair is a love triangle. In energy transfer, the original spouse wins out. The middle deuterium atom decides not to stray and rebounds to the other deuterium atom—its first love—setting both to vibrate rapidly."
The new findings may have ramifications for understanding what happens in any chemical reaction, in addition to interactions between chemicals that do not result in a reaction but instead result in energy transfer. So far, one instance has been discovered, but Zare believes that this behavior is likely to be found in many other collision systems.
"This is very fundamental stuff as to what happens in transformations of matter from one state to another," Zare said. "It's very fundamental chemistry."
Comparing the ramifications of the new findings to a ripple spreading out from a pebble dropped into a pond, Zare said, "Maybe this will be the sound of one hand clapping, if the ripple doesn't go anywhere. Taken together, the only way we advance is making these ripples and following them as they spread outward."
Zare's group did the experiments that revealed the energy transfer occurring during "soft" collisions between the hydrogen atom and the deuterium molecule by using techniques and equipment for measuring the molecular interactions that had previously been developed in his laboratory. The experimental work is a major portion of the doctoral thesis of his graduate student Noah T. Goldberg, who was assisted in these measurements by Jianyang Zhang, a postdoctoral researcher, and graduate student Daniel J. Miller. The theoretical calculations that provided the model used to explain the observations is the result of work done by co-authors Stuart Greaves of the University of Bristol and Eckart Wrede of the University of Durham, both in Britain.
The research done at Stanford was funded by the National Science Foundation. The research done in Britain was funded by the Engineering and Physical Sciences Research Council.
"dump": "CC-MAIN-2016-40",
"url": "http://www.nanowerk.com/news/newsid=6288.php",
"date": "2016-09-26T22:33:39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.953861653804779,
"token_count": 1140,
"score": 3.734375,
"int_score": 4
} |
Connect With the Skies
NASA's Meteoroid Environment Office
The NASA Meteoroid Environment Office (MEO) is the NASA organization responsible for meteoroid environments pertaining to spacecraft engineering and operations.
› View Site
Stay 'Up All Night' to Watch the Perseids!
Editor's note: This event is closed.
The annual Perseid meteor shower peaked on the night of Aug. 11-12. Rates can get as high as 100 per hour, with many fireballs visible in the night sky. Early in the evening, a waxing crescent moon will interfere slightly with this year's show, but it will have set by the time of the best viewing, just before dawn. The best opportunity to see Perseids is during the dark, pre-dawn hours of Aug. 12.
How to See Perseid Meteors
For optimal viewing, find an open sky because Perseid meteors come across the sky from all directions. Lie on the ground and look straight up into the dark sky. Again, it is important to be far away from artificial lights. Your eyes can take up to 30 minutes to adjust to the darkness, so allow plenty of time for your eyes to dark-adapt.
About the Perseids
The Perseids have been observed for at least 2,000 years and are associated with the comet Swift-Tuttle, which orbits the sun once every 133 years. Each year in August, the Earth passes through a cloud of the comet's debris. These bits of ice and dust -- most over 1,000 years old -- burn up in the Earth's atmosphere to create one of the best meteor showers of the year. The Perseids can be seen all over the sky, but the best viewing opportunities will be across the northern hemisphere. Those with sharp eyes will see that the meteors radiate from the direction of the constellation Perseus.
Do You Have Photos of Perseid Meteors?
If you have some stellar images of the Perseid meteor shower, please consider adding them to the Perseid Meteors group in Flickr. Who knows - your images may attract interest from the media and receive international exposure.
"dump": "CC-MAIN-2016-40",
"url": "http://www.nasa.gov/connect/chat/perseids_2013a.html",
"date": "2016-09-26T22:49:30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9019837975502014,
"token_count": 439,
"score": 2.90625,
"int_score": 3
} |
The whale shark is the largest known fish. It reaches 20 m (66 ft.) in length and weighs 20 t (22 tn.).
It is mainly solitary in nature, but it can also be found in groups of more than 100 individuals. Despite its impressive appearance, it is harmless to humans. Scuba divers and underwater swimmers have clambered unmolested over its body.
The whale shark feeds chiefly on plankton, but also consumes sardines and anchovies.
They range throughout all tropical ocean waters, and infrequently stray into temperate ones.
"dump": "CC-MAIN-2016-40",
"url": "http://www.nature.ca/notebooks/english/wlshark.htm",
"date": "2016-09-26T22:35:07",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9605587124824524,
"token_count": 116,
"score": 2.8125,
"int_score": 3
} |
A cast of characters throughout the ages have protected this wild and mighty river. Meet some of the most notable.
In 1791 famed naturalist William Bartram published an account of his journey through eight southern colonies including a visit to the Altamaha. While there he documented the rare Franklin tree which is now extinct in the wild; all cultivated specimens descend from seeds he collected.
The Robert W. Woodruff Foundation has provided significant funding to protect the Altamaha. “The Conservancy helped us appreciate the significance of the Altamaha more than 20 years ago. Thanks to their persistent efforts, future generations will enjoy this river in perpetuity.” - Russ Hardin, Foundation president
Before, during and after becoming President of the United States, Jimmy Carter personally and politically supported preserving Georgia’s natural heritage including the Altamaha River.
Without decades of vision from the Georgia Department of Natural Resources, protecting the Altamaha would have been impossible. From leaders like Noel Holcomb, Mike Harris, Steve Friedman and Mark Williams and countless field staff, the people of the DNR are tenacious and dedicated.
The first female vice-chairman of the Conservancy’s national Board of Trustees, political powerhouse and courageous conservationist Jane Yarn took out a personal loan in 1969 to buy an island in the mouth of the Altamaha and then helped protect many other special places in Georgia.
The strong and steadfast Tavia McCuean served as state director for the Conservancy in Georgia for two decades. It was her foresight that led to early land acquisitions that now anchor a 42-mile long protected corridor.
Who are the heroes of the Rafinesque’s big-eared bats, the swallow-tailed kites, and the Eastern indigo snakes? The trees – the ancient cypress, bottomland hardwoods and longleaf pines – and those who work to protect these special forests.
Bill Haynes grew up on the banks of Black Island Creek. He organized The Rally to Save the Altamaha, a grassroots effort with fishermen, shrimpers, farmers and others that helped prevent changes like channelization that could have been the end of this great river as we know it.
Carrying forth the legacy of strong women working to defend this river, Alison McGee has, over her 16-year career with the Conservancy, helped protect and care for some of the most iconic areas around the river, including Moody Forest.
“The Altamaha is irrepressibly and exotically beautiful,” writes Janisse Ray. This Georgia-born author has written about the Altamaha and surrounding forests in her memoirs and other publications, raising the profile of this wild river.
“The Early Republic and Indian Country, 1812-1833” is a four-week school teacher institute for twenty-five participants on the interactions between Native Americans and European Americans in the early nineteenth century. From the end of the American Revolution to the 1830s, the region between the Appalachian Mountains and the Mississippi River was the site of interactions between white settlers and Native Americans and of conflicts over land, power, and governance that erupted during the War of 1812. The institute brings recent scholarship to bear on this period, supplemented by maps and documents from the excellent collections at the Newberry Library’s D’Arcy McNickle Center for American Indian and Indigenous Studies. The center’s director, Scott Stevens, and Frank Valadez of the Chicago Metro History Education Center direct the institute, with Ann Durkin Keating of North Central College as lead scholar. Participants first explore the reactions of Native groups to incursions by the French, British, and Americans from the late 1700s through 1810, and readings are drawn from Daniel Richter’s Facing East from Indian Country and Richard White’s The Middle Ground. They then consider ways that Indian groups incorporated Euro-American trade and culture into their societies, led by Susan Sleeper-Smith of Michigan State University. Readings for this second week include selections from Sleeper-Smith’s Indian Women and French Men, Theda Perdue’s “Native Women in the Early Republic,” and Richard White’s “The Fiction of Patriarchy: Indians and Whites in the Early Republic.” In the third week, participants focus on Native resistance and the War of 1812 with R. David Edmunds of the University of Texas at Dallas, reading excerpts from Edmunds’s Tecumseh and the Quest for Indian Leadership and Joel Martin’s Sacred Revolt.
Finally, John Hall of the University of Wisconsin leads an examination of Indian removal after the War of 1812, with readings from Hall’s Uncommon Defense and Kerry Trask’s Black Hawk. Primary source materials include items such as an 1836 “Map of the Sites of Indian Tribes of North America,” letters of U.S. Indian agents, and the records of fur traders, which will bring the past to life. The group also visits the Field Museum, the Mitchell Museum, and the Chicago History Museum.
Just like different skin types, different ethnicities also come with their own set of skincare problems.
Darker skin, in general, contains more melanin than lighter skin.
Here are some of the more specific skincare issues that different ethnicities are likely to face.
The most common skincare issue that south Asian women suffer from is hyperpigmentation, with Asia alone accounting for 37 per cent of the overall worldwide sales of skin brightening products.
Melanocytes are cells that produce melanin to protect the skin from the sun or ageing. When these cells overproduce melanin, hyperpigmentation appears.
The bad news is that once your skin's been affected by pigmentation it can be hard to fix.
'Correction is very difficult', explains Candice Gardner, education manager at the International Dermal Institute.
'However, it is possible to prevent existing pigmentation from becoming worse by ensuring that your skin is consistently protected from the sun's rays and applying sunscreen daily.'
Dark circles under the eyes are also an ongoing skin concern for many south Asian women.
'Under eye circles are usually down to genetics, but they can also be a result of sun damage', explains leading dermatologist Dr Howard Murad.
'To reduce the darkness or help prevent it, I'd recommend using an eye cream with SPF daily.'
Afro-Caribbean skin produces the most melanin, which gives the skin colour and also acts as a barrier to help protect the skin.
But Candice warns that this doesn't necessarily mean you can skip the sunscreen. 'Don't assume that because your skin is darker and produces more melanin that it cannot be sensitised.
'Often the sensitivity can be felt as the skin is hot, rather than seen, as the redness might not be visible.'
People with Afro-Caribbean skin are also likely to suffer from dermatosis papulosa nigra (DPN).
'These are small, black, protruding spots that usually appear on the face and neck', explains Dr Murad.
'Although the cause of it is unknown, protection from the sun and applying sunscreen daily can help prevent it.'
Sufferers of DPN can also opt for surgical solutions, such as curettage or cryotherapy to remove the spots.
However, these treatments carry a risk of scarring and skin discolouration.
East Asian skin has more melanocytes than fairer skin, which protects the skin from sun damage.
But it is also prone to suffering from brown spots and uneven skin tone.
'Oriental people are more susceptible to pigmentation and, because of their lighter skin tone, it tends to show up more', says Dr Murad.
'Oriental skin is also more sensitive to inflammation so you need to be careful when choosing skincare products and treatments because anything too harsh, like a chemical peel, could cause irritation and will only inflame the skin more.'
East Asian people are also more prone to getting sebaceous keratosis, which is similar to DPN and usually appears around the eyes.
'Hispanic skin is very similar to oriental skin', explains Dr Murad.
'However, even though you are less likely to suffer from sebaceous keratosis, the lighter skin tone can mean that pigmentation and dark spots show up more.'
Melasma, which causes dark patches to appear around the face, is also a common problem amongst Hispanic women.
This is usually triggered by pregnancy, but it can also occur with age.
Other contributing factors are genetics, hormone levels and UV exposure.
In this case, prevention is better than cure and Dr Murad suggests taking extra precautions when out in the sun.
'Applying a sunscreen with a SPF factor of at least 15 can help prevent dark spots and skin conditions like melasma from appearing or becoming worse.'
Mixed race skin differs vastly from person to person, and skin issues can vary depending on how light or dark your skin is.
'Over time, I've seen an increasing number of clients come to me with mixed race skin', says Dr Murad.
'Dark skin is usually more pigmented and oily and light skin is the opposite, which means mixed race skin can be a combination of both.'
Because of this, mixed race skin can also suffer from uneven skin tone and dark patches around the face, all of which can be avoided through proper sun protection and regular exfoliation.
The good news
Although people with ethnic skins are prone to suffering from pigmentation and dark patches, dermatologists and skincare specialists all agree that the darker your skin is, the slower it will age.
'The higher level of skin pigment (melanin) in dark skin is the primary factor in defending the skin from ageing UV light from the sun', explains Candice.
'Secondly, dark skin produces a higher level of lipids (natural protective oils), meaning the skin has a more compact, denser protective barrier.
'And thirdly, the pH is more acidic, which means skin barrier recovery is quicker.
'The result is skin that copes well with its environment and will show signs of ageing at a much slower rate than compared to Caucasian skin.'
The bottom line
Sun exposure is the main cause of a lot of skincare problems – and it can also aggravate existing problems, such as pigmentation and melasma.
Taking precautions when out in the sun is essential to keeping your skin safe and healthy, so remember to apply a sun cream with an SPF factor of at least 15 every day.
Want to know more about ethnic beauty?
Ethnic skincare: Just like different skin types, different ethnicities also come with their own set of skincare problems.
Ethnic suncare: One of the biggest skincare myths is that people with darker skin do not need much or any protection from the sun.
Anti-ageing plan for all ethnicities: Ageing, wrinkles and crow’s feet are an unavoidable fact of life, but for most ethnic people this is something they will only face much later in life.
Appropriately for a month that concludes with a holiday designed around scary things, October has been declared Cyber Security Awareness Month. President Obama recently signed a proclamation and urged everyone to back up files, keep Internet-surfing children safe, and "play an active role in securing the cyber networks we use every day."
National Cyber Security Awareness Month is part of a campaign organized by the National Cyber Security Alliance (NCSA) and backed by the Department of Homeland Security. The government agency said, "America's competitiveness and economic prosperity in the 21st century will depend on effective cybersecurity."
STOP | THINK | CONNECT
NCSA said October's designation is part of the first Global Online Safety Campaign, called STOP | THINK | CONNECT, which began Monday. The public-private partnership is intended to "help all digital citizens employ universal behaviors to protect themselves," the organization said.
Several companies have initiated specific security-related measures in support of the month.
Digital security firm McAfee, for instance, announced Monday it will expand its initiative to fight cybercrime. The McAfee initiative includes an Online Safety for Kids program, in which its employees and partners volunteer to teach schoolchildren about safety and security online. It also made a cybercrime grant to the National White Collar Crime Center to train more law-enforcement personnel to detect, investigate and arrest lawbreakers.
The cybersafety education program was piloted last year in more than 100 schools, and the company reports it has reached more than 3,000 children. It's being expanded this fall to more schools in the U.S., as well as to other countries.
The initiative was originally announced by McAfee two years ago this month. It includes awards to individuals and organizations, an online resource portal, and an advisory council.
Security software provider CyberDefender has issued guidelines to keep families safe. The recommendations suggest that families set up separate user accounts for shared computers, make sure antivirus security programs are up to date, set up specific times each week to do virus scans on every PC, and talk among family members about smart computing practices.
Other tips from CyberDefender suggest using parental controls, keeping the security software suite running at all times, and calling a technician when problems arise, as opposed to only using software tools.
Visa said it will mark the month with a new web site to help cardholders and small businesses protect account data and avoid scams. It noted that a study from Javelin Strategy & Research found that more than 50 percent of consumers see the responsibility of protecting financial accounts as shared between users and the companies or institutions.
Some tips from the credit-card company include looking for the padlock icon in a browser's status bar and an "s" after "http" in the URL when exchanging confidential information online. Users can also activate "Verified by Visa" to add extra protection during online checkouts, and the company pointed out that Visa never calls users for private account information.
The mainstream media applauded the Feb. 12 decision by the U.S. federal "vaccine court" that the MMR vaccine and vaccines containing ethyl mercury as a preservative did not cause autism in three children chosen as test cases. But that's not enough to repair the damage already done to the U.S. vaccine program.
It's hard for a single court decision to compete with ongoing allegations from grieving parents and celebrities that vaccines created an epidemic of autism. Those allegations have generated confusion and fear in the minds of many young parents, reduced public trust in the remarkable benefits and safety of U.S. immunization programs and put both vaccinated and unvaccinated children at increased risk from preventable diseases. Furthermore, significant unanswered questions about the safety of vaccines have been documented by the Institute of Medicine and the National Institutes of Health. For example, are some few individuals genetically more susceptible to adverse reactions from certain vaccines? A more common worry among parents is "Are too many vaccines given too soon?"
Parents of newborn infants can't take two years of study, as did the vaccine court, to sort out sound science from junk, innuendo and unsubstantiated allegation. As a result, rates of vaccine refusal have climbed to levels allowing clustered outbreaks of vaccine-preventable diseases such as measles, pertussis and meningitis, posing a threat to those unvaccinated because of medical contraindications, age and parental choice. For example, in Washington, statewide refusal rates now exceed 5 percent, including rates exceeding 15 percent in some counties. Other states show doubling rates. Also worrisome is the disproportionate amount of time pediatricians must now spend to assure fearful parents that vaccination is the best choice for their child. At what level will the growing refusal rates put us at risk of major epidemics?
What has been missing in order to give parents confidence that immunization is one of the best ways to protect the health of their children? Our national failure falls into two categories. First, we've had inadequate ongoing, credible education of the public and health professions from trusted public-health officials concerning the known and unknown benefits and risks of vaccines. Today's parents have little fear of diseases they mistakenly think have been eliminated by vaccines. Second, there's been grossly insufficient investment in research on the safety of immunization. Together, these failures contributed to undermining of public confidence.
This is not the first time parental concern has threatened to deprive our children of the benefits of immunization. In the early 1980s, a spate of lawsuits threatened to drive vaccinemakers and doctors out of the immunization business. Then three highly polarized groups—parents who believed vaccines injured their children, vaccine companies and pediatricians—collaborated to create the National Childhood Vaccine Injury Act of 1986 (NCVIA). That law, a pragmatic, compromise solution, saved a then-fragile U.S. immunization effort. It offered financial relief to vaccine-injured children, prevented the demise of a vaccine industry that had dwindled rapidly from 26 to four companies and protected pediatricians whose careers then were being jeopardized by malpractice suits, even though they were properly administering vaccines. Over the past two decades, that law has distributed $1.8 billion with financial compensation to more than 2,200 families and individuals, encouraged dramatic expansion of the vaccine industry and allowed pediatricians to remain the mainstay of our successful immunization program.
The vaccine injury act put the secretary of the Department of Health and Human Services (HHS) in charge of planning and monitoring the effectiveness and safety of our national immunization program. There's plenty of financial incentive for industry, venture capitalists, government agencies, clinicians and the academic community to develop and distribute vaccines. In contrast, without federal government investment, no such incentives are available to support research on vaccine safety.
The responsibility for development, licensing, purchase, distribution and monitoring of vaccines is divided among a handful of federal agencies. Because of the wide range of scientific skills needed to study the safety of vaccines, we need a coordinated plan with funds to match. But no such plan has ever been put in effect. (In the last few months of his tenure under President George W. Bush, HHS Secretary Michael Leavitt did make some progress, but the effort was unfunded, incomplete and hampered by his short, lame-duck status.)
What remains to be done? The incoming secretary of HHS, with the backing of the White House, must carry out aggressively the duties assigned by the 1986 law: development and implementation of a national vaccine plan that includes adequate funds for communication and vaccine-safety research. Given the current distrust of government, development and accountability for the plan deserves serious, transparent input, not just by scientists but also by more than token participation of the public. It is that public whose trust has been eroded.
As parents, grandparents and health professionals, we know how immunization has revolutionized child health. But to maintain that progress, we must restore public trust in vaccinations. Ignoring public anxiety about childhood vaccines—and the increase in parents who skip or stretch out immunizations—risks even more serious outbreaks of vaccine-preventable diseases. We need visible leadership from the incoming secretary of HHS, supported by President Obama. The new public-health team must describe clearly the known benefits and risks of vaccines—and take into account safety issues as perceived by the public and scientific community. We know the new administration has a long list of problems to confront, but there are few issues more urgent than the health of our children. We hope they act quickly.
Penny Gordon-Larsen, Ph.D. for the CARDIA Obesity and Environment Investigators
There are major gaps in our understanding of the way shifts in the physical and social environment affect changes in dietary intake and physical activity patterns among any age group. This new study will focus on modifiable factors in the physical environment [i.e., community design features, recreation facilities (e.g., public, private), eating and shopping facilities, transportation options (e.g., public transportation), food prices, crime, and air pollution] that might contribute to the differential distribution of physical activity and dietary intake patterns. This research specifically addresses race/ethnic disparity in physical activity and dietary intake patterns that is related to disparities in environmental stressors.
The sample includes participants in the Coronary Artery Risk Development in Young Adults Study (CARDIA), a longitudinal study of the antecedents and risk factors for cardiovascular disease in an ethnicity-, age-, and sex-balanced cohort of 5,115 black and white young adults aged 18-30 years at baseline (1985-86). The Obesity and Environment study is an approved CARDIA Ancillary Study. The central task of this study is to link geographically, using Geographic Information Systems (GIS) technologies, time-varying respondent residential addresses from four CARDIA study years (1985-86, 1992-93, 1995-96, and 2000-01) with contemporaneous data on environmental factors derived from a series of federal and commercial data bases. This work will allow exploration of the density and proximity of individual CARDIA respondents to diet and activity-related facilities and resources, and the subsequent impact on physical activity, diet, and obesity patterns. The research team will use a system of innovative, analytical time-varying methods that allow sophisticated analysis to examine a rich set of hypotheses and issues related to environmental factors and their relationship to physical activity and diet behaviors over time.
Complex longitudinal and spatial analytical models will be used to explore relationships between environmental factors and physical activity, diet, and obesity. Physical activity and diet will be separately modeled as a function of covariates, some of which may be endogenous choices made by individuals. Race/ethnic differentials in these effects and the impact of shifts in the environment over time and through the lifecycle will be examined. The longitudinal analysis and the vast array of environmental measures used, coupled with the very high quality physical activity and dietary intake measures of CARDIA, provide the opportunity to capture the effects of the environment (and changes in location) on physical activity and dietary shifts.
It's difficult to be completely certain from your image, but I would say it's a biscuit beetle, Stegobium paniceum. This species has the honour of being the most common enquiry seen by the ID team in the Museum!
The larvae like dried vegetable products like flour and spices, and they are often found in kitchens. However, if you suspect they are coming out of the chimney, the source is likely to be an old bird or wasp nest or bread dropped down the chimney by birds. The best way to get rid of them is to find and remove the food source. If they are in your kitchen cupboards (or anyone else's), give the cupboards a scrub, throw away any old or infested food and keep open packets inside sealed tins, jars or plastic tubs.
Jiroušková, Jana, et al.
Two Men at the Foot of Kilimanjaro. African collections of M. Lány and H. Fuchs. National Museum, Prague, 128 p.
One of the largest collections documenting the culture of native people from around Mount Kilimanjaro is to be found in the collections of the Náprstek Museum. It comes from the turn of the 19th and 20th centuries and, in size and importance, is comparable to Emil Holub’s collection, also housed in the Náprstek Museum. The donors of the collection were two men who had linked their lives with East Africa: Martin Bohdan Lány (1876 – 1941) and Hans August Fuchs (1875 – 1934).
The collection of M. B. Lány and H. A. Fuchs comprises 541 items, which the Náprstek Museum acquired between 1903 and 1910. The whole collection can be divided into six parts: 1) jewellery and ornaments, 2) weapons, 3) ritual objects, 4) objects for everyday use, 5) dress and dress accessories, 6) musical instruments.
Organisms need a variety of nutrients for maintenance, growth, and reproduction. When one nutrient influences the ability of primary producers to access or use a second nutrient it is called co-limitation. There are a variety of mechanisms for co-limitation, and in Nahant, the availability of nitrogen has been found to limit the uptake of phosphorus in an important intertidal foundation species, the seaweed Fucus vesiculosus.
Val Perini, a graduate student in Matthew Bracken’s lab, spent two years studying the natural fluctuations in nitrogen to phosphorus ratios at Canoe Beach in Nahant. Then, based on observed patterns, she developed manipulative experiments to measure how nutrient levels in the water and in seaweed tissue impact the ability of F. vesiculosus to take up nitrogen and phosphorus.
The research, which appears in the journal Oecologia, suggests that F. vesiculosus is not able to take up phosphorus without an adequate supply of nitrogen. Therefore, due to seasonal changes in nitrogen levels in coastal waters, F. vesiculosus may be phosphorus limited during periods of low nitrogen availability, despite ample phosphorus levels in Nahant waters throughout the year.
SCR683 Adaptation Workshop
Lead Faculty: Ms. Bettina Moss
Course Description: Building on core screenwriting courses, this advanced workshop focuses on creating an outline for a feature-length screenplay based upon source material from another medium such as short stories, news articles and other sources. Students receive critical review of their outline and in a consultation with the instructor will create an action plan for writing the screenplay.
- Integrate learned elements of screenplay writing and apply them to an idea for adaptation.
- Incorporate constructive criticism in order to revise story ideas and outlines.
- Evaluate films and produced screenplays based upon adapted sources.
- Analyze constructs from films and produced screenplays which support the student's adapted idea.
- Implement adaptation screenwriting techniques to write an outline based upon source material from another medium.
Vibrational and Electronic Spectra
The Electronic Ground State
Molecules are almost always in their electronic ground state, that is, the electrons fill the orbitals of the molecule according to the Aufbau Principle, orbitals of lower energy are filled before orbitals of higher energy. If the orbitals are not filled in this manner, the molecule is in an electronic excited state. This is depicted below for the hydrogen atom (Figure 1).
A molecule in the electronic ground state can exist in a variety of vibrational and rotational states. The ground state with all of these vibrational and rotational states is represented by a Morse-like curve shown in Figure 2. The x-axis represents the so-called Q coordinate and the y-axis represents the energy of the molecule. For the hydrogen molecule, Q can be thought of as the internuclear distance between the two hydrogen atoms. For very small values of Q, the energy is very high due to repulsion of the positively charged nuclei. Very large values of Q result in isolated, noninteracting atoms, and the energy represents the energy of the isolated atoms. In between the extremes is the region of bonding interactions between the atoms.
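The shape of such a Morse-like curve can be sketched numerically. The sketch below is illustrative only; the parameters (dissociation energy De in eV, well width a, and equilibrium separation Qe in Ångstroms) are rough literature values for H2 supplied for the example, not numbers taken from this tutorial:

```python
import math

def morse_energy(q, d_e=4.75, a=1.94, q_e=0.74):
    """Morse potential V(Q) = De * (1 - exp(-a * (Q - Qe)))**2, in eV.

    V is zero at the equilibrium separation Qe, approaches the
    dissociation energy De for isolated atoms (large Q), and climbs
    steeply at small Q where the positively charged nuclei repel.
    """
    return d_e * (1.0 - math.exp(-a * (q - q_e))) ** 2

print(morse_energy(0.74))            # 0.0  -> bottom of the well
print(round(morse_energy(10.0), 2))  # 4.75 -> isolated-atom limit (De)
print(morse_energy(0.3) > 4.75)      # True -> nuclear-repulsion region
```

Evaluating the function over a grid of Q values reproduces the three regions described above: the steep repulsive wall, the bonding well, and the flat dissociation limit.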
Vibrational Excited States
As mentioned above, a molecule in the ground electronic state can exist in a variety of vibrational and rotational states. In this tutorial, only vibrational states will be covered. Each vibrational state exists at a definite energy, but over a range of Q values. Existing at a definite energy and a definite Q-value would violate the Heisenberg Uncertainty Principle, because both the location of the atoms and their energy could be known precisely at the same time. For this reason, the lowest vibrational state is not at the absolute bottom of the electronic "well" as shown in Figure 3. Transitions between vibrational levels can occur upon absorption or emission of a photon as shown in Figure 4. The photon changes the vibrational quantum number (v) by +1 (absorption) or -1 (emission). You may be wondering what this has to do with the inorganic lab. The energy difference between the vibrational levels falls in the infrared region of the electromagnetic spectrum. The absorptions observed in the infrared spectrum are changes in these vibrational levels within one electronic state.
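To put numbers on this, the energy of a vibrational transition can be computed directly from the position of an IR band via E = hc times the wavenumber. The 2143 cm-1 value below is the well-known stretch of free CO, used here only as a familiar example; it is not a value given in this tutorial:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e10     # speed of light in cm/s, to match wavenumbers in cm^-1
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def transition_energy_kj_mol(wavenumber_cm):
    """Energy of a vibrational transition at the given IR wavenumber,
    expressed per mole of photons: E = h * c * nu * N_A."""
    return H * C * wavenumber_cm * N_A / 1000.0  # kJ/mol

# A v = 0 -> 1 absorption at 2143 cm^-1 (free CO) corresponds to:
print(round(transition_energy_kj_mol(2143.0), 1))  # 25.6 kJ/mol
```

Energies of a few tens of kJ/mol are typical of bond vibrations, which is why these transitions fall in the infrared rather than the visible or ultraviolet.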
Sometimes these vibrational absorptions are very localized and can be associated with the stretching or bending of a specific bond. Most of the time, however, the individual bond stretches and bends are coupled. The observed IR absorptions are combinations of the bending and stretching of several bonds. Symmetry and relative energies determine the combinations that are observed, but in a relatively complicated manner (Group Theory books cover this area). In general, the more symmetry-equivalent bonds there are to vibrate, the fewer the absorptions that will be observed. There may still be more than one absorption observed for a group of symmetry-equivalent bond vibrations.
Carbon Monoxide Ligands and Backbonding
One type of bond stretching mode deserves special attention here, that of the carbon monoxide ligand (or carbonyl ligand, as it is often called). Carbonyl ligand stretching occurs in a well-defined energy range, 1700 cm-1 to 2200 cm-1. Carbonyl stretching modes are often coupled to other carbonyl stretches, but not to other types of stretches or bending modes. That is, the observed absorptions are fairly "pure" carbonyl stretching modes. As you might have noticed, these stretches occur over a fairly large energy range. This is because the observed stretching frequency is very dependent on the bonding between the ligand and the metal center.
The carbon monoxide ligand bonds to a metal by donating electron density (from its nonbonding electron pair) into a metal d-orbital of sigma symmetry and accepting electron density from a filled metal d-orbital of pi symmetry into its pi* antibonding orbital. The frontier orbitals of carbon monoxide are shown below in Figure 6 as pictorial depictions calculated with CAChe. As you can see, the HOMO of CO is primarily a lone pair orbital. It is also slightly antibonding between the carbon and oxygen atoms. The LUMO, on the other hand, is a pi*
Figure 6. HOMO and LUMO of CO as calculated with CAChe. Gray atom is C, Red atom is O.
orbital (antibonding between the carbon and oxygen atoms). When the CO molecule bonds to a metal, these frontier orbitals interact as shown in Figure 7. The donation of electron density from the CO to the metal results in a slight strengthening of the CO bond (electron density is being removed from a slightly antibonding orbital). Likewise, the electron density being accepted into the pi* orbital results in a dramatic weakening of the CO bond. The frequency of a bond vibration in a diatomic molecule is determined by Hooke's Law, shown in equation 1. The frequency is proportional to the square root of the force constant of the bond.
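In the harmonic (Hooke's Law) approximation for a diatomic A-B, the stretching wavenumber depends on the force constant k of the bond and the reduced mass mu of the two atoms; this is the standard form behind equation 1:

```latex
\tilde{\nu} \;=\; \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}},
\qquad
\mu = \frac{m_A m_B}{m_A + m_B}
```

A stronger bond (larger k) therefore vibrates at a higher wavenumber, while heavier atoms (larger mu) shift the band to lower energy.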
Figure 7. Interaction of the CO HOMO (a) and the CO LUMO (b) with a metal atom (silver atom, Mo).
Therefore, the stronger the bond, the higher the energy of the stretching frequency for that bond (for the same molecule). The importance of this is that the amount of pi donation to the carbonyl ligand is usually dependent on the amount of electron density at the metal center. Therefore, the carbonyl stretching frequencies can be used as an indicator of the electron density at the metal in a closely related set of molecules.
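As a numerical sketch of the Hooke's Law relation above: the force constant of roughly 1857 N/m used here is a literature value for free CO, chosen only for illustration.

```python
import math

AMU = 1.66054e-27          # kg per atomic mass unit
C_CM_S = 2.99792458e10     # speed of light, cm/s

def stretch_wavenumber(k_N_per_m, m1_amu, m2_amu):
    """Harmonic (Hooke's Law) stretching wavenumber, in cm^-1, of a
    diatomic oscillator with force constant k and reduced mass mu."""
    mu = m1_amu * m2_amu / (m1_amu + m2_amu) * AMU   # reduced mass, kg
    return math.sqrt(k_N_per_m / mu) / (2 * math.pi * C_CM_S)

# Free CO with k ~ 1857 N/m comes out near the top of the carbonyl
# range; weakening the bond (smaller k, more backbonding into pi*)
# moves the band to lower wavenumber.
print(round(stretch_wavenumber(1857, 12.000, 15.995)))
```

This lands close to the observed stretch of free CO, near 2143 cm-1, consistent with the 1700-2200 cm-1 carbonyl range quoted above.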
Electronic Excited States
Transitions from one electronic state to another can also occur upon absorption of a photon. The energy range for these transitions falls in the ultraviolet and visible range of the electromagnetic spectrum for the most part. A transition from one electronic state to another is shown in Figure 8. Much information can be drawn from such a figure.
Figure 8. A transition from the ground electronic state to an excited electronic state
Organic Molecules and Selection Rules
You have probably had some background in UV-Vis spectroscopy in your organic class. The types of transitions important to organic molecules are pi-pi*, sigma-sigma* and n-pi*. These transitions are fairly intense because of the selection rules. For molecules with an inversion center, the symmetry selection rule is that the orbital the electron is promoted from and the orbital the electron is promoted to cannot have the same symmetry with respect to the inversion center. To begin with, a molecule has an inversion center if every point in the molecule (x,y,z) can be interchanged with every point (-x,-y,-z), when the center of the molecule is at point (0,0,0), without any noticeable change to the molecule. If we take ethylene as an example (Figure 9), the pi orbital has ungerade (u) symmetry with respect to the inversion center. This is because if we transfer every point (x,y,z) to the point (-x,-y,-z), the resulting orbital has the positive lobes replaced by the negative lobes and vice versa. Similarly, the pi* orbital has gerade (g) symmetry because an inversion operation on this orbital results in no change. Since the pi orbital is u and the pi* orbital is g, the pi-pi* transition is symmetry allowed. In the same way, a transition from the C-C sigma bond (g symmetry) to the C-C sigma* orbital (u symmetry) is allowed, but the transition from sigma (g symmetry) to pi* (g symmetry) is symmetry forbidden.
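The parity bookkeeping in this paragraph (often called the Laporte rule) reduces to a single comparison; the parities below are the ethylene assignments given in the text.

```python
# Inversion parities of the ethylene orbitals discussed above.
PARITY = {"sigma": "g", "pi": "u", "sigma*": "u", "pi*": "g"}

def symmetry_allowed(initial, final):
    """Symmetry (Laporte) selection rule for centrosymmetric molecules:
    a transition is allowed only between orbitals of opposite parity."""
    return PARITY[initial] != PARITY[final]

print(symmetry_allowed("pi", "pi*"))        # u -> g, allowed
print(symmetry_allowed("sigma", "sigma*"))  # g -> u, allowed
print(symmetry_allowed("sigma", "pi*"))     # g -> g, forbidden
```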
Figure 9. Examples of ungerade and gerade orbitals of ethylene
Another way of looking at these transitions is with molecular orbital diagrams. In Figure 10, part of the molecular orbital diagram of ethylene is shown (the C-C sigma and pi bond portions). The same transition that can be shown with the energy well diagram can be shown as excitation of an electron from one orbital to another.
When there is a metal present in the molecule, several other types of transitions are possible. This is because of the more complex orbital schemes in metal complexes. The first (and most intense) type of metal-based transition is the charge transfer transition. In the example of ethylene above, the initial and final orbitals involved in the transition were centered on the same atoms. In a charge transfer transition, they are not (hence the name: the transfer of an electron from one part of the molecule to another). The two types of charge transfer transitions we will discuss are metal to ligand charge transfer transitions (MLCT) and ligand to metal charge transfer transitions (LMCT). In a typical MLCT transition, an electron from one of the metal orbitals is transferred to a pi* orbital of one of the ligands on the metal. If the metal has unoccupied d orbitals, a transfer from a ligand orbital to the metal is also possible (LMCT).
The second type of transition we will discuss is the d-d transition. This is the excitation of an electron from one metal d-orbital to another metal d-orbital. You might have noticed a problem with this type of transition. All of the metal d-orbitals are of g symmetry. That makes a d-d transition symmetry forbidden. This brings us to vibronic coupling. Certain vibrations can remove the center of symmetry from the molecule. This makes the d-d transitions weakly allowed and, in some cases, observable.
The d-d transitions from the previous section are formally symmetry-forbidden. This forbidden character manifests itself in the intensity of the observed d-d transitions. Most pi-pi* and sigma-sigma* transitions have intensities of several hundred to a few thousand L/mole cm. Most d-d transitions have intensities of less than 100 L/mole cm. Charge transfer bands can have very large intensities (in the 10's of thousands).
|Type of Transition|Approximate Intensity (L/mole cm)|
|d-d|less than 100|
|pi-pi* and sigma-sigma*|several hundred to a few thousand|
|charge transfer|1000's to 10,000's|
For electronic transitions, the information that needs to be reported is the solvent (the transitions are solvent dependent), the wavelength of the maximum absorption (lambda max, usually in nanometers) and the extinction coefficient. The extinction coefficient is not always reported for known compounds. For vibrational transitions, report the solvent (or matrix), the minimum transmittance of the absorption (usually in wavenumbers), and a relative measure of the intensity of the band (vw for very weak, w for weak, m for medium, s for strong, vs for very strong, b for broad). The measures of intensity are estimates and are relative to other peaks of the compound in the spectrum.
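The extinction coefficient mentioned above comes from the Beer-Lambert law, A = epsilon * c * l. A minimal sketch follows; the absorbance and concentration values are invented for illustration.

```python
def extinction_coefficient(absorbance, conc_mol_per_L, path_cm=1.0):
    """Beer-Lambert law A = epsilon * c * l, solved for the
    extinction coefficient epsilon in L/(mole cm)."""
    return absorbance / (conc_mol_per_L * path_cm)

# A weak band: A = 0.05 from a 1.0e-3 M solution in a 1 cm cell
# gives epsilon of about 50, under the ~100 typical of d-d bands.
print(extinction_coefficient(0.05, 1.0e-3))
```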
Digging out facts regarding the town's history is quite laborious due to the inadequacy, if not the outright absence, of authoritative records. Compounding the problem even more was the unfortunate burning of the municipal building in 1951 by dissidents. The building had been the last frontier of historical records because of the absence of a public library. Interviews with the town's old folks, based on oral accounts, were made and collated with a work of similar objective by the scholarly priest Fr. Jose R. De La Cruz.
The collated account pointed out that Sta. Rita had humble beginnings. It started as a settlement a long, long time ago at barrio Gasak, now San Isidro. From there it expanded to a wide territory embracing today's barrios San Vicente, San Matias, Santa Monica, San Agustin and San Juan. The settlement, which eventually grew bigger, was then a part of Porac. Politically and religiously, Porac managed the affairs of the town.
The distance between the emerging town of Sta. Rita and the mother town Porac proved catastrophic in terms of peace and order. The Aetas of the Porac mountains would often come down to the lowlands, bringing havoc and fear among the people, resorting to banditry and even raping innocent women. It was in 1724 that Sta. Rita was carved out of Porac, although not as a separate parish.
The parish priest of Porac continued to manage and administer Sta. Rita religiously. The time came in 1771 when Sta. Rita assumed independence as a parish from Porac. A certain Rev. Fr. Eustaqio Polina engineered the eventful moment, for which Ritenians would give thanks in the years thereafter.
As an independent parish, Sta. Rita now would need a church of its own, if only to continue the road and crusade of evangelization. The search was over in 1839 when, like rain on desert sand, Fr. Francisco Rayo, then the parish priest of Sta. Rita, began the Herculean task of building the present parish church. Thanks are due to the "polo", or forced labor, which was then legal; it undoubtedly contributed much to the completion of the church.
The Hispanization of Bacolor and the eventual transfer there of the capital of the Spanish Philippines by Governor General Simon de Anda brought repercussions to Sta. Rita. It was due to this that the town was associated by many inhabitants with Bacolor, who called it "Sta. Rita de Lele" or "Sta. Rita Baculud".
The end of the Spanish occupation ushered in the American military and then civil occupation. The onset of the Taft administration (Governor William Howard Taft) and of Governor Joven in Pampanga caused the town of Sta. Rita to be merged with Bacolor. The setback was, however, temporary and short-lived, for the town's energetic son Don Basilio Ocampo, and Don Magno Gosioco, then the incumbent mayor, succeeded in their crusade to separate Sta. Rita from Bacolor.
The outbreak of World War II sent jolts to the country. The treacherous attack on Pearl Harbor by Japanese Zero fighters on December 8, 1941 struck horror. At noon the same day, Clark Field was bombed. That was the first time the Ritenian marketers went home from "Wawa" (Guagua) bringing not the good tidings of the Guagua Town Fiesta but news of fear from falling bombs and strafing bullets. It was the cheerless Christmas of 1941.
The post-liberation era was a period of harsh and rigid discipline, of witch-hunting and vendetta against the "makapili". It was an era of emasculation and strict adherence to discipline. The legalized "polo" or forced labor slowly regained favor in the reconstruction of the public plaza, with a pool of compulsory contributions in labor, money or kind.
The onset of the 50's produced a disciplined "warden" in charge of the fragile peace and order situation of the town, "Mang Dado Dizon". His brand of discipline sent fear into the townsfolk and harnessed them to submission. This was also the period when a dissident leader named "Pampanga" was captured and incarcerated in the municipal jail. The prisoner, however, demonstrated civic-mindedness when he planted shady fruit-bearing trees around the plaza.
Not long thereafter, in 1951, dissidents raided and burned the Municipal building, reducing to ashes everything and every document housed in it.
The election into office of Sta. Rita's longest-reigning mayor, Mr. German Galang, brought peace. The mayor had no college diploma to boast of, but rode on an underdog image and low-profile public relations, well loved by his constituents; that is why he stayed in power for a little over two decades.
In matters of religious life, the Ritenians were much influenced by their parish priest, "Padre Ambo" or Rev. Fr. Camilo, a pious priest who demonstrated a Christ-like priestly career founded on poverty, mortification and celibacy. He died poor in belongings but rich in spiritual gems. His demise was felt in every nook of Ritenian life. Even the tolling of the bells brought sorrowful music and melody.
Another priest who had left a lasting legacy of spiritual influence was Rev. Fr. Fidel Dabu. His trademarks… the Angelus at twilight, the Rosary at eight in the evening and the… Panalangin king Abac, the last one… a routine that embedded itself so deeply in the hearts of Ritenian religiosity, and the clock of the farmers at dawn… "Abac na, mangadi ne y Apung Dabu."
It took a very scholarly priest, Rev. Fr. Alfredo Lorenzo, to introduce and educate the Ritenians about "lahar". That was 1991, and the parishioners knew nothing about it. They thought the cessation of the Mount Pinatubo eruption would simply be the ordinary end, finished with pyroclastic materials, ash, etc. carpeting the mountains of Zambales. It was, though, just the beginning of a nightmare. The ensuing years were horror of unimaginable proportion. Sapang Baluyut was choked to the neck by a steaming river of mud and debris… Mitla was buried to the roofs, Balas, Bacolor was 15 to 20 feet below the rampaging mudflow… and Sta. Rita lay prostrate with her fallen barangays… San Juan, Gasak and San Jose. Unthinkable… the future seemed to be doomed.
The heroes of the FVR Megadike surfaced, driving away the agent of death. The Ritenians followed the cue and the town, as if by miracle, was excluded from the catch basin… delivered finally from the catastrophe. The hand of the Almighty and the intercession of the patroness "Apung Dita" were felt, though invisible. Misa sa Control was spearheaded and sustained by Rev. Fr. "Among Jess" Mariano even up to this day. The town continues to live and remains steadfast. The future rolls on with its uncertainties… and the Ritenian prays, hoping to brave the tempest and billows of divineness…
Use of air dispersion modeling to estimate the time potentially available for emergency response action needed to protect public safety from chemical releases
Abstract (Summary): The Release Incorporating Terrain Effects (RITE) Emergency Response Software model was used to determine the amount of time potentially available for emergency response personnel to notify the public and convey instructions on the proper actions that should be taken in the event of a chemical release. The release that was modeled involved chemicals found on occasion in the major rail yard in Cincinnati, Ohio adjacent to Interstate 75. Three chemicals, hydrogen cyanide, chlorine, and ammonia, were used to simulate an accidental release. Meteorological conditions were input to the model to represent a variety of scenarios with each of the three chemicals. The plume travel distance was predicted for each of the chemicals at three concentrations related to occupational exposure: the Permissible Exposure Limit (PEL), the Threshold Limit Value (TLV), and the level Immediately Dangerous to Life or Health (IDLH). The distance the plume traveled, used in conjunction with the time frame in which it moved, was used to determine the amount of time available to notify the public. Wind speed did affect the dispersion of the chemicals. Wind speeds below 18 mph, which represents the 95th percentile 24-hour average wind speed, result in the plume covering a greater area, thus exposing larger numbers of people to possibly hazardous conditions. Higher wind speeds, those above 18 mph, tend to limit the development and area of the plume at the specified points and result in the plume dissipating at a faster rate. This is due to the increased mixing of the air and the dilution of the chemical at a much higher rate.
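The distance-and-time arithmetic described in the abstract can be sketched as straight-line plume transport. The distance and receptor values below are invented for illustration; the RITE model itself accounts for terrain and dispersion, which this toy calculation does not.

```python
def warning_time_minutes(distance_m, wind_speed_mph):
    """Rough time before a plume front, carried at the ambient wind
    speed, covers a given downwind distance. Straight-line transport
    only; not a substitute for a dispersion model."""
    MPH_TO_M_PER_S = 0.44704
    return distance_m / (wind_speed_mph * MPH_TO_M_PER_S) / 60.0

# At the 95th-percentile 24-hour wind speed of 18 mph, a receptor
# 2 km downwind has on the order of four minutes of plume travel time.
print(round(warning_time_minutes(2000, 18), 1))
```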
School: University of Cincinnati
School Location: USA - Ohio
Source Type: Master's Thesis
Keywords: university of cincinnati
Date of Publication:
The QNX Neutrino RTOS (from now on referred to as simply "QNX") was designed to facilitate the development of applications for the embedded market; developers can write and build applications on their 'normal' machines using the same microkernel that they will use on their devices.
Merits Of The Neutrino
QNX is an operating system based on a true microkernel design. Microkernels are the opposite of monolithic kernels; instead of having all functions, drivers, filesystems and so forth inside kernelspace, microkernels keep them outside of kernelspace, in userspace. The advantages a microkernel has to offer over a monolithic kernel are therefore quite obvious; one can replace and restart drivers, filesystems and so forth on the fly, without having to shut down the entire system. Also, when part of the operating system crashes, it does not bring down the entire system, because the crashed part is outside of kernelspace. Theoretically, QNX could run without any downtime for ages and stay up-to-date at the same time. You can imagine that such a fault-tolerant, crash-insensitive design is very important in mission-critical situations; and this is indeed the case. QNX powers all sorts of devices, ranging from hospital equipment (such as MRI scanners) to parts of the International Space Station to in-car multimedia devices. Now that's scalability! Another major advantage of a microkernel design is simplicity; since the operating system is 'chopped' into smaller parts running in userspace, it is supposed to be easier to maintain and easier to program. In the early days of the microkernel, people also expected it to be faster and more efficient than a monolithic kernel.
Do all these advantages sound too good to be true? Well, simply put: yes, they indeed sound too good to be true. If not, all operating systems would be using microkernels today. I can explain the main drawback of the microkernel design with a very simple illustration (with thanks to CTO):
"Thinking that microkernels may enhance computational performance can stem but from a typical myopic analysis: indeed, at every place where functionality is implemented, things look locally simpler and more efficient. Now, if you look at the whole picture, and sum the local effects of microkernel design all over the place, it is obvious that the global effect is complexity and bloat in as much as the design was followed, i.e. at every server barrier. For an analogy, take a big heavy beef, chop it into small morsels, wrap those morsels within hygienic plastic bags, and link those bags with strings; whereas each morsel is much smaller than the original beef, the end-result will be heavier than the beef by the weight of the plastic and string, in a ratio inversely proportional to the small size of chops (i.e. the more someone boasts about the local simplicity achieved by his microkernel, the more global complexity he has actually added with regard to similar design without microkernel)." (Source)
Now, this clearly explains the drawback of the microkernel design: a microkernel design inevitably gets heavy and bloated (contrary to what the name 'microkernel' implies), thus reducing its speed-- simply not acceptable for everyday users. QNX, on the other hand, feels anything but bloated and sluggish, and that is probably also why it is one of the current market leaders in the embedded world. Now, why does QNX' microkernel (named 'Neutrino') seem to perform better than other microkernels? This is where it gets really technical, and I cannot give a crisp explanation, since I am not qualified. My technical knowledge is simply too limited. A post by Bernd Paysan, though, in alt.os.multics explained why QNX performs better than other microkernels:
"[...] Unix's syscalls all are synchronous. That makes them a bad target for a microkernel, and the primary reason why Mach and Minix are so bad - they want to emulate Unix on top of a microkernel. Don't do that.
If you want to make a good microkernel, choose a different syscall paradigm. Syscalls of a message based system must be asynchronous (e.g. asynchronous IO), and event-driven (you get events as answers to various questions, the events are in the order of completion, not in the order of requests). You can map Unix calls on top of this on the user side, but it won't necessarily perform well."
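A toy sketch of the paradigm Paysan describes: requests are issued together, and completion events are consumed in finish order rather than submission order. The request names and the 'latency' stand-in for I/O time are invented for illustration; this is not a real QNX API.

```python
import heapq

def completion_order(requests):
    """Simulate async I/O: all requests are in flight at once and the
    client handles each completion event as it arrives, i.e. sorted by
    finish time rather than by submission order."""
    in_flight = [(latency, name) for name, latency in requests]
    heapq.heapify(in_flight)                 # earliest finisher on top
    events = []
    while in_flight:
        _, name = heapq.heappop(in_flight)   # next completion event
        events.append(name)
    return events

# Submitted in order A, B, C -- but the answers come back B, C, A:
print(completion_order([("A", 30), ("B", 10), ("C", 20)]))
```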
QNX' applications do exactly what Bernd Paysan concludes-- "[...] if you design the application to use [QNX'] message passing APIs, it will work well." (I got this information from Christopher Browne's Web Pages, under "3.2. The Fundamental Challenge of Microkernels")
There is a lot more info on microkernels to be found on the internet. If you enjoy discussing matters such as proprietary vs. open-source, PPC vs. x86, GUI vs. CLI and so on, you will certainly find the microkernel vs. monolithic kernel discussions extremely interesting. A must-read on this subject is the classic discussion between Andy Tanenbaum (creator of Minix) and Linus Torvalds (creator of the Linux kernel). With Andy Tanenbaum being a proponent of the microkernel, and Linus Torvalds being a supporter of a monolithic design, all the ingredients were there for a very interesting 'flamewar' (by 1992 standards this was very much a flamewar). An extract of this discussion, held in comp.os.minix in 1992, can be found here. Even if you are not interested in microkernels, it is still a good read. Especially the attitude towards the future of the Intel platform would prove to be... sort of wrong.
So far for a short description on the merits of the microkernel, with the Neutrino kernel in particular. I strongly believe that a lot of people have a lot more interesting things to say about this subject than I do, seeing my limited knowledge on kernel design. Therefore, please feel free to correct me.
I now wish to move on to the actual goal of this article: how does QNX perform as a desktop operating system? | <urn:uuid:8f9220a7-f0c0-428d-867b-7d469a3a9278> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.osnews.com/story/8911/QNX_The_Unexpected_Surprise/page1/",
"date": "2016-09-26T23:24:57",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9381878972053528,
"token_count": 1334,
"score": 2.5625,
"int_score": 3
} |
This wiki page shows pictures of geophytes growing in the wild in northern California along the Sonoma Mendocino coast, arranged alphabetically from Marah through S. Rainfall in this location starts in the fall, with the most rain coming in December and January and less rain continuing sometimes as late as May. Summers are dry, although there are periods of fog in summer which bring some moisture. Temperatures are moderate year round. Habitats are mixed evergreen and Redwood forests, bluff scrub, riparian areas and some limited grasslands, but much of this latter habitat (grasslands) is now gone. Most flowers bloom late spring into summer.
Marah fabaceus grows along streams and embankments and in shrubby and open areas. It has rotate (spreading, with a short or nonexistent tube) flowers that are yellowish green or cream-colored, or occasionally white (in those found inland). The fruit is globe shaped with a spiny surface. It is found in California in the Sierra Nevada and the coast ranges. I believe these pictures taken on the bluff at Salt Point State Park are of this species, although Marah oreganus also grows in the park. Photos by Mary Sue Ittner.
Marah oreganus grows on slopes, in canyons and hilly areas and the edge of forests from San Francisco Bay area, California, north to British Columbia. Flowers are white, small, and bell like. The fruit is tapered to a beak, often striped dark green with prickles sparse to dense. Photos taken by Bob Rutemoeller and Mary Sue Ittner at Manchester State Beach and another sandy areas near the ocean.
Narcissus spp. Various Narcissus species or cultivars can be found along the coast as garden escapees since they are not native. Photos below were taken by Bob Rutemoeller at Salt Point State Park in January 2009. Plants look like Narcissus tazetta, but they could be a cultivar.
Nuphar polysepala is native to western North America where it grows in ponds, lakes, and sluggish streams. It flowers late spring to summer. It has shiny large leaves lying flat on the water and big waxy yellow flowers. The photos below were taken by Bob Rutemoeller in Sonoma County where a pond was completely covered with this plant. The leaves were much more upright than usual so from a distance we did not recognize it, but a closer look at the striking flower made the identification secure.
Oxalis oregana is a plant with green trifoliate leaves and purple flowers growing on horizontal rootstocks. It is a ground cover found in coastal forests from California to Washington. In shady Redwood forests it is one of the few plants that competes well and you can often see great carpets of it there. The first one was photographed in Kruse Rhododendron State Park in California by Bob Rutemoeller and the second picture from Mary Sue Ittner shows the carpet of leaves you often see.
Oxalis pes-caprae is a terrible escaped exotic that is native to South Africa. It is found in many areas along the coast. These pictures were taken in Mendocino County close to Highway One. Photos by Mary Sue Ittner.
Piperia elegans ssp. elegans is generally found in dry, open sites, scrub, conifer forest below 500 m from California north to British Columbia flowering May through September. Photos taken by Bob Rutemoeller different years in July and September.
Piperia transversa or the royal rein orchid is found from California north to British Columbia. It is usually found in dry sites, scrub, oak woodland, mixed-evergreen or conifer forest. The basal leaves are usually withered by the time it blooms, late May to August. Photos taken by Mary Sue Ittner early July 2016.
Prosartes smithii , syn. Disporum smithii or Fairy Bells is found in moist shady forests near the coast. It has creamy white bells that hang under the leaves and therefore are not easy to see followed by the berries that eventually turns from green to large orange red. The first two photos by Bob Rutemoeller and the next two from Mary Sue Ittner.
Romulea rosea is another exotic from South Africa that in 2010 was noted by the California Weed Council as a species to be concerned about after it was discovered in large numbers on the Jenner Headlands in Sonoma County. When hiking there, we saw a huge number in bloom and in seed. Most were growing in grassland that already is composed of a majority of exotic species (including the non-native grasses). Photo by Mary Sue Ittner.
Scoliopus bigelovii is a northern coastal California species that is found in very wet habitats: mossy streambanks and moist, shady forests. This plant is called slink pod or fetid adder’s tongue by locals. The first one photographed by Bob Rutemoeller was blooming in May 2003 in Sonoma County, California. The next two photos were taken by Mary Sue Ittner of new leaves and a flower in February 2016 and later in another year of the spotted leaves after flowering has finished.
Sisyrinchium bellum , known as Blue-eyed Grass, is found in open grassy places in the Pacific States. The first photograph was taken by Bob Rutemoeller in Sonoma County, California in May 2003. A later picture of his is a close-up. The last photo from Mary Sue Ittner was taken in Mendocino County near Navarro Point June 2007.
Sisyrinchium californicum , known as yellow-eyed or golden-eyed grass, is found near the coast in wet places from British Columbia to central California. Photos from Salt Point State Park, Sonoma County, California where it is found growing only in marshy places or areas that get extra water such as road verges. Photos by Mary Sue Ittner and Bob Rutemoeller.
Smilacina racemosa see Maianthemum racemosum
Smilacina stellata see Maianthemum stellatum
Spiranthes romanzoffiana known by the common name of hooded ladies tresses is found in various habitats in North America (forests, riparian wetland), but also coastal bluffs and dunes. Photos taken by Mary Sue Ittner and Bob Rutemoeller at Manchester State Beach and Salt Point State Park in summer.
It has been said vultures are cleansers of the environment, or that they are symbols of motherhood for their ability to make life from death. But I say that Turkey Vultures are, above all else, loyal. They are the truest friend, at least to other vultures, because being a Turkey Vulture requires cooperation, and that starts with the vulture syrinx—or lack of one.
It was Mildred Miskimen of Miami University who first dissected the upper throat of a Turkey Vulture in 1957 and found: “The Turkey Vulture shows no syrinx. The trachea branches into two bronchi with no syringeal drum, no pessulus, and no membrane between cartilages or at the apex of the bronchi.” The short account here is that birds need a syrinx—the avian voice-box—to sing, call, cluck, tweet, hoot, trumpet, or warble. To communicate.
And Turkey Vultures don’t have one.
So what do Turkey Vultures say, sans syrinx? Most raptor rehabbers note that Turkey Vultures “grunt and hiss”—quiet sounds that work in close proximity. But vultures have a trick to communicate long-distance messages—like “I may have a spot of food here”—without making a sound.
Turkey Vultures are one of a few bird species that are proven smellers, able to perceive minute stenches from decaying meat. Back in the 1960s the odorant ethyl mercaptan was placed into southern California oil pipelines to learn where the leakages were. Biologist Kenneth Stager was documenting Turkey Vulture smell perception, and realized the rotten smell from even the smallest leak had an avian indicator. Simply follow the Turkey Vultures. Where they flock, there’s your leak.
This brings us right back to cooperative vulture food-finding. Turkey Vultures mostly hunt in loose, widely-spaced flocks. When one vulture catches a whiff of rot, it may then make dozens of repeat flights over the stinky area. These flights are ritualized—long, shallow, parabolic dives that seem to be the vulture version of yelling “Hey! We have food here! Come join!” And indeed, other vultures come closer to check things out.
I have seen this many times, this parabolic-flying Turkey Vulture town crier routine, and I used to wonder: why share? One answer is that they may need help from others to tear a thick mammal hide apart. Or maybe they provide safety in numbers; it can take a long series of flaps for a well-fed vulture to get off the ground, time enough for a coyote or puma to do its job.
But I prefer to attribute this to loyalty, to vulture friendship. I think it’s time for our culture to choose to see vultures in a more positive light, in contrast to the pop-culture conception of them as the deathly specters of Disney cartoons.
Lacking a syrinx, Turkey Vultures merely use what they have—a dramatic, trim-winged, parabolic glide—repeated over and over again to snare the attention of hungry colleagues, to relish the smell, to share the bounty.
By Allen Fish
Director, Golden Gate Raptor Observatory
Photos by Don Moseman
THE HOLY FAMILY
Feast: The Sunday after Christmas
Scripture tells us practically nothing about the first years and the boyhood of the Child Jesus. All we know are the facts of the sojourn in Egypt, the return to Nazareth, and the incidents that occurred when the twelve-year-old boy accompanied his parents to Jerusalem. In her liturgy the Church hurries over this period of Christ's life with equal brevity. The general breakdown of the family, however, at the end of the past century and at the beginning of our own, prompted the popes, especially the far-sighted Leo XIII, to promote the observance of this feast with the hope that it might instill into Christian families something of the faithful love and the devoted attachment that characterize the family of Nazareth. The primary purpose of the Church in instituting and promoting this feast is to present the Holy Family as the model and exemplar of all Christian families.
— Excerpted from With Christ Through the Year, Rev. Bernard Strasser, O.S.B.
The Holy Family
Marriage is too often conceived as the sacrament which unites a man and a woman to form a couple. In reality, marriage establishes a family, and its purpose is to increase the number of the elect, through the bodily and spiritual fecundity of the Christian spouses.
1. Every marriage intends children. Although Mary and Joseph were not united in a carnal way, their marriage is a true marriage: an indissoluble, exclusive union, wholly subordinated to the child. Mary and Joseph are united only in order to bring Jesus into the world, to protect and raise him. They have only one child, but he contains the whole of mankind, even as Isaac, an only child, fulfilled the promise made to Abraham of a countless progeny.
2. The purpose of every marriage is to establish a Christian family. The Holy Family observed the religious laws of Israel; it went in pilgrimage to Jerusalem every year with other Jewish families (Lk. 2:41). Jesus saddens and amazes his father and his mother because to their will and company he prefers "to be in his Father's house". Thus it may happen that God's will obliges the family to make disconcerting sacrifices. Yet every Christian family must live in harmony and in prayer, which are the pledges of joy and union.
3. "He remained obedient to them." Jesus was God. And through the fullness of grace Mary stood above Joseph. Nevertheless — if we except the event in the Temple — Joseph remained the head of the family; he took the initiative (as when the Holy Family fled to Egypt), and in Nazareth Jesus obeyed his parents.
— Excerpted from Bread and the Word, A.M. Roguet
Vocation of the Family Is to Support Each Other on the Road to Heaven
In the Gospel we do not find speeches on the family but an event that is worth more than any word: God willed to be born and to grow up in a human family. In this way, He has consecrated the family as the first and ordinary way of His encounter with humanity.
During His life in Nazareth, Jesus honored the Virgin Mary and righteous Joseph, being subject to their authority during the whole time of His infancy and adolescence (Luke 2:51-52). In this way, He made evident the primary value of the family in the education of a person. Jesus was introduced to the religious community by Mary and Joseph, frequenting the synagogue of Nazareth.
With them He learned how to make the pilgrimage to Jerusalem, as narrated in the Gospel passage that the liturgy of the day proposes for our meditation. When He was 12 years old, He stayed behind in the temple, and His parents took three days to find Him. With that gesture, He led them to understand that He had to "attend to His Father's business," that is, to the mission that God had entrusted to Him (Luke 2:41-52).
This Gospel episode reveals the most authentic and profound vocation of the family: that of supporting each one of its members on the path of discovery of God and of the plan He has ordained for them. Mary and Joseph educated Jesus above all by their example: From His parents, He learned all the beauty of the faith, of the love of God and of His law, as well as the exigencies of justice, which finds its fulfillment in love (Romans 13:10).
From them He learned first of all that one must do God's will, and that the spiritual bond is worth more than that of blood. The Holy Family is truly the "prototype" of every Christian family that, united in the sacrament of marriage and nourished by the Word and the Eucharist, is called to carry out the marvelous vocation and mission of being a living cell not only of society but of the Church, sign and instrument of unity for the whole human race.
Let us now invoke together the protection of Mary Most Holy and of St. Joseph for every family, especially for those in difficulty. May they be supported so that they will be able to resist the disintegrating impulses of a certain contemporary culture which undermines the very basis of the family institution. May they help Christian families throughout the world to be the living image of the love of God.
— Benedict XVI, Feast of the Holy Family 2006
Things to Do:
Let us imitate the Holy Family in our Christian families, and our family will be a cell and a prefiguration of the heavenly family. Say a prayer dedicating your family to the Holy Family. Also pray for all families and for our country to uphold the sanctity of the marriage bond which is under attack.
Read the explanation of Jesus' knowledge in the activities section. Read Pope Pius X's Syllabus of Errors which condemns the modernist assertion that Christ did not always possess the consciousness of His Messianic dignity.
Have the whole family participate in cooking dinner. You might try a Lebanese meal. Some suggestions: stuffed grape leaves, stuffed cabbage rolls, lentils and rice, spinach and meat pies, chicken and dumplings, hummus, Lebanese bread, tabbouleh (a Lebanese salad), and kibbi (a traditional Lebanese dish of specially ground meat mixed with spices and cracked wheat). This is the same kind of food that Mary served Jesus and St. Joseph. It's healthy and delicious.
Pray the Liturgy of the Hours.
Ron Guth: The 1863 Three-Cent Silver is a very scarce date with a mintage of only 21,000 circulation strikes. The surviving population is quite low in all grades, and most of the certified examples are in Mint State. This indicates that collectors set aside some high grade examples but did not pursue the circulated grades. The rarity of this date in Mint State is similar to that of the 1865, 1866, and 1870.
Typically, this date displays weakness on the topmost tip of the star and on the ribbon end on the opposite side of the coin. Clashmarks are not as prevalent as on other dates, but there is often a ghosting effect of details from the other side. High-end examples are very rare, with a gap in MS67 just before a single MS68 and one monstrous PCGS MS68+ (illustrated above).
Nonfiction master Russell Freedman illuminates for young readers the complex and rarely discussed subject of World War I. The tangled relationships and alliances of many nations, the introduction of modern weaponry, and top-level military decisions that resulted in thousands upon thousands of casualties all contributed to the “great war,” which people hoped and believed would be the only conflict of its kind. In this clear and authoritative account, the Newbery Medal-winning author shows the ways in which the seeds of a second world war were sown in the first.
About Russell Freedman
Russell Freedman is the author of over thirty-five nonfiction books. His works have received many awards, among them the Robert F. Sibert Award, a Newbery Medal, and a Newbery Honor. He was recently awarded the May Hill Arbuthnot Honor Lecture.
Published by Listening Library (Audio), Jul 27, 2010 | 209 Minutes | Young Adult | ISBN 9780307738530
People belonging to ‘Generation Y’ believe they have higher morals and better communication skills than all preceding generations, but are less efficient and less adaptable despite a fast-changing workplace, research has found.
A study of 4,000 members of the Baby-boomer, Generation-X and Generation-Y age-groups by people assessment firm Talent Q found that people in their early 20s claimed they had higher ethical standards than older generations, as well as greater attention to detail and better social skills.
However, researchers also found they were generally less organised and less efficient, something Talent Q’s chairman Roger Holdsworth said was concerning.
“The days where a person has a job for life are long gone, so it’s perverse that the Generation Y psyche appears to show less adaptability, efficiency and dynamism than older generations,” said Holdsworth.
“The 20-somethings we studied were also less resilient, less confident at negotiation and decision-making, less influential in a leadership capacity and less able or willing to follow the rules – all of which is concerning for the future.
“But there were positives too. In stark contrast to popular perceptions of surly, selfish and aggressive youth, the younger generation claims to have a stronger ethical code, is more socially aware and more in tune with others’ behaviour than its elders.”
The research also found the Baby-boomers were the most likely to adopt new techniques and most likely to favour radical ideas, but were also less ambitious and less socially confident.
“Perhaps because of growing up in the 1960s, radicalism still shapes the Baby-boomer psyche,” said Holdsworth.
“They remain more adaptable to change than younger people – very much confounding the view that you can’t teach an old dog new tricks.”
David Biespiel, poet and writer, Attic Writers Workshop:
Just two brief notes about the meaning of Senator Kennedy's death. Again, I confess, I have no television. But I have viewed snippets of Kennedy memorial events online.
I suspect that much has been said about what a great senator Ted Kennedy was after he freed himself of his presidential ambition. That's true, but it is an unfair, reductive way to look at his career because the loss in 1980 didn't just free Ted Kennedy from running for president, it freed others from urging and longing for Ted Kennedy to become president. In other words, Ted Kennedy arrived in the Senate in January 1963 to be a senator--not to run for president. His brother already had that job in the Kennedy family. It always seemed that Ted Kennedy was imprisoned to the ambition of others regarding the presidency.
Prior to 1980's campaign loss, moreover, Ted Kennedy was a powerfully effective legislator and a great senator already--not living off the legacy of his brothers' martyrdom, but in effect creating, enacting, and legislating that legacy and our sense of that legacy. In other words, to tinker with a well-known expression: Jack Kennedy was no Ted Kennedy.
Ted Kennedy's legislative achievements prior to that 1980 campaign as majority whip and as the author or co-sponsor of hundreds of successful bills included important legislation in the areas of civil rights, voting rights, immigration, education, health care, women's rights, and care for the elderly. His post-1980 legislative record, oftentimes, was an effort to preserve his pre-1980 accomplishments against conservative efforts to dismantle them and then to incrementally advance or improve on his own pre-1980 accomplishments.
Finally, I have this small observation about the spiritual consequences of Senator Kennedy's death. It lets his brothers rest. Ted's death from natural causes, his death at the end of a long life, seems to be having the effect of healing the nation's shock at the violent deaths of his brothers.
His natural death after a long life is emblematic of a bittersweet notion: He made it. He survived. Unlike his brothers who were killed before they could reach their potentials, and whose deaths left a wound on the national psyche with national grief mixed up with national revulsion, Ted Kennedy lived to carry out his potential, lived to achieve his promise. That fact gives the nation comfort and is a balm. For me, as a fantasy, I can imagine the shades of John and Robert saying, "Thank God Teddy made it. Thank God that Teddy lived."
May Be Clue To The Mystery Of SIDS
Finding a pattern
"We detected a pattern of cytokine in the SIDS brain that could overturn a delicate balance in molecular interactions in vital brain centers," says study author Hazim Kadhim, MD, PhD, of Université Catholique de Louvain and Free University in Brussels, Belgium. "It seems that high levels of interleukin-1 could be a common denominator in SIDS."
Cytokines like interleukin-1 could be released in the body in response to various stimuli, under infectious or inflammatory conditions, and when there is a lack of oxygen. Cytokines are not always harmful. When cytokines interact with neurotransmitters (substances that send nerve impulses across the brain), the result could change vital functions like arousal responses in the central nervous system, according to Kadhim. These modified arousal responses could cause SIDS.
The ages in each group were not exactly matched in the study. The infants with SIDS ranged six weeks to 10 months in age. The non-SIDS infants ranged one day to 18 months.
An editorial in the same issue of Neurology says the study results are subject to criticism because there is no agreement on what is a suitable control group to compare with SIDS infants.
"Since the SIDS and control infants were not age-matched, it's difficult to say how normal developmental changes in cytokine levels impacted the results," says editorial author Bradley T. Thach, MD, of Washington University School of Medicine in St. Louis, Missouri. "Another crucial question is what is the cause of elevated cytokines in SIDS?"
Detection before baby is born?
Cytokine levels can be checked by examining blood, cerebrospinal fluid and amniotic fluid, says Kadhim. He also said that no studies have yet correlated the levels of cytokines in the brain with those in peripheral blood in SIDS infants.
Some believe that a combination of three conditions is necessary for a SIDS death to occur. In this "triple-risk model," the infant must have a vulnerability like sleep apnea or low birth weight. A trivial stressor such as a mild respiratory infection or partial lack of oxygen is often present. Unexpected death can occur when these two risk factors hit an infant during a critical period of development -- usually between three and eight months. SIDS remains the leading cause of death in infants between one month and one year of age in developed countries. The exact cause of SIDS is unclear.
Is your child gifted? We’ll explore ten of the most common characteristics of gifted children and how schools assess whether students qualify for gifted programs.
The term “gifted” has been thrown around in public education circles for decades – often misused, misdiagnosed and misunderstood. Gifted children may present in various ways; some characteristics are positive and some are not as desirable. When determining giftedness in a student, it is essential to take a number of factors into consideration, since not all gifted children will exhibit the same characteristics at the same time. This list offers 10 of the most common characteristics seen in gifted students.
Gifted children often begin communicating verbally at an early age, and they use vocabulary far beyond their age. These children are often referred to as “precocious” because of their language usage. The website for Amend Psychological Services lists some of the verbal features of gifted children as “avid storytellers,” early talkers, or those with an extensive and precise vocabulary. These children often choose their words carefully, but tend to use a lot of them. They can also get frustrated with children in the same age group who are unable to understand them and often turn to older children or adults for conversation.
Education.com states that gifted children often have an “unusual capacity for processing information” and are often able to process that information more quickly and accurately than their peers. These children typically master subjects like reading and math much more quickly than their peers, which can make it difficult to keep them challenged in a regular school setting. Bright Hub Education explains that some gifted children become disruptive in classrooms – often because they are bored with the material that is taught over and over again.
High Curiosity Level
Gifted children often have a high curiosity level and dive into subjects with a passion not seen in most children their age. Amend Psychological Services says it is not unusual for a gifted child to learn the names of all the dinosaurs or the stats for every player on a baseball team at a very young age. Beth Israel Deaconess Medical Center calls this characteristic a “deep absorption in activities that interest them,” and parents of gifted children quickly learn just how deep that absorption can go when they have to take a child to the library or help them find facts on the Internet over and over again.
Gifted children are often able to retain information faster and for longer periods of time than average children of the same age. Their rapid learning ability allows them to process facts quickly and retain them for efficient recall later on. High memory retention combined with fast information processing often means these children learn subjects at a rapid-fire rate that can make it challenging for parents and teachers to present information to gifted children as fast as they like.
Intensity and Persistence
Many gifted children are intense in the way they learn, which is often why they pick up large amounts of information so quickly. They can also be intense socially, with acute sensitivity to the needs and feelings of others, according to Education.com. These children are able to show compassion to others at a much deeper level than other children their age. However, the intensity and persistence can also work against a gifted child on occasion, when the child encounters a problem he cannot easily solve or a topic he cannot seem to master as quickly.
Sense of Humor
Gifted children are enjoyable to be around because many exhibit a sense of humor that goes well beyond their years. Bright Hub Education states that these children often have a special appreciation for more subtle types of humor like satire. They also enjoy plays on words, such as puns, and are particularly adept at using these comic techniques themselves. Whether their sense of humor comes out in their conversation or their writing, these students can be a joy to converse with.
Sense of Justice
Gifted children often have an acute sense of justice, which can translate to high expectations of themselves and others. While their strong moral compass can make them effective leaders, and ensure good choices in many situations, this characteristic can also make it difficult for them to forge long-lasting relationships with others. These children often become interested in justice and fairness at a very early age, which continues throughout their lives.
Gifted children often exhibit a strong imagination, with an ability to spin tales that parents and teachers do not necessarily expect. Education.com says these children often show originality in their oral, written or artistic expression and are viewed as highly creative. Gifted children may spend time fantasizing, and are often categorized as independent thinkers.
Children who fall into this group may have the ability to pick up on details much more acutely than other children in the same age bracket. Whether reading a book or watching a movie, gifted students often notice seemingly nonessential pieces of information that others might miss. Their attention to detail often results in long, drawn-out renditions of situations or conflicts – a frequent source of frustration for parents and teachers at times.
Problem Solving Capabilities
Often perceived as effective problem solvers, gifted children typically relish nothing more than breaking down a complex issue and finding a solution that no one else has every thought of. These children, according to Education.com, have an “advanced cognitive and affective capacity for conceptualizing societal problems” – the potential leaders of the future.
Labeling a child as “gifted” is a somewhat complex process that involves careful observation and objective testing in most cases. While this list is not an exhaustive one, it does provide insight into some of the most common characteristics of gifted children to help teachers and parents know whether further assessments are warranted.
Do the math… up at 6:15a.m., do what you gotta’ do, out of the door and on your way to school by 6:30a.m. The first bell rings at 7:15a.m. and the average high-schooler’s daily routine has only just begun.
We teenagers really aren’t getting enough sleep, and because the schools aren’t doing too much about it, parents have to work harder to monitor their kids’ sleeping patterns and spend more time making sure they stay in top shape throughout the day. The average high-schooler goes to sleep around 11:30 on a school night. Because we wake up so early, we are thereby only given more or less six hours of sleep per school night.
According to the American Sleep Disorders Association, the average teenager needs around nine hours of sleep per night, mostly because hormones important to an adolescent’s growth are being released during slumber. Unfortunately, although we high-schoolers appear to be awake in school, our brains are not actually fully functioning until about 8:30a.m., which puts us around the middle of our second period classes.
A special report released by CNN from the Mayo Clinic concurs with this amount of sleep required for teens. Obviously it isn’t too feasible that we could be going to sleep at around 9:15p.m., considering all the homework and extra-curricular activities we’re involved in these days. Although parents with busy kids cannot have them go to sleep so early, there are still many ways to greatly improve sleep habits and patterns.
The Mayo Clinic’s study adds that a teen’s increased need for sleep is unfortunately undermined by his or her inability to fall asleep at such an early hour. One major reason for this is that electronic devices are impacting teenagers’ lives significantly. Studies indicate that students with four or more electronic entertainment devices in their bedroom (iPod, laptop/computer, cell phone, portable gaming device, etc.) are twice as likely not to fall asleep at an appropriate and recommended hour (9 o’clock). Getting enough sleep as an adolescent is extremely important to our daily lives and overall health. Mary Carskadon, director of chronobiology at E.P. Bradley Hospital in Rhode Island, was interviewed by PBS.
One question asked was, “How much sleep are adolescents getting?” She answered with the following:
“In our surveys and in our field studies, we’re seeing that, on average, teens are getting about seven-and-a-half hours a night’s sleep on school nights. And actually a quarter of the kids are getting six-and-a-half hours or less sleep on school nights. So when you put that in context of what they need to be optimally alert, which is nine-and-a-quarter hours of sleep, it’s clear that they’re building huge, huge sleep debts, night after night after night.”
Mary Carskadon continues to discuss that, “The problem is worst for teenagers in the morning. Fundamentally, the issue is they’re not filling up their tank at night, and so they’re starting the day with an empty tank. Interestingly, there is another part of their brain that’s the biological timing system, or circadian clock, that actually helps to prop them up at the end of the day. But when they start the day with the empty tank and there’s no biological clock helping them in the morning, they really should be home in bed sleeping, not sleeping in the classroom.”
For the last few nights, I have gone to sleep earlier to test the difference in my daily ability during school hours. After getting approximately nine hours of sleep for five days in a row, I felt significantly better in the morning, and was able to pay attention much better during all of my classes. I was much less irritable, I felt extremely refreshed, and felt much better all around. Just from my personal tests and own experiences, I know that going to sleep earlier every night strongly affects how you react to certain things the following day.
OK, so now we know sleep is important, really important- and your kids are just not getting enough of it. Now what?
First, if your teen is not getting enough rest at night, it affects his/her schoolwork and grades significantly. If we try to study late at night as opposed to earlier in the day, we will not be able to retain most of the information, as the only place it can go is short-term memory.
Sleep deprivation affects emotions as well. If we wake up without enough rest, we normally feel depressed, distressed, and usually in an irritable mood. Our relationships with our peers, teachers and especially parents can suffer.
Sleep deprivation can also interfere with our coordination. We can’t think quickly enough to make important decisions, our reaction time is bad, and our joints and muscles are affected. So if your teen is driving to school, that’s definitely something you need to keep a closer eye on. Sleep deprivation hampers our everyday motor skills and is a huge concern when teens are handling mechanical devices, especially cars.
According to many different studies and lots of research done over the last twenty years, there are five main actions your child can take toward making better habits:
- Decide which after-school activities are feasible and work with your teen’s schedule.
- Create a more relaxed evening atmosphere so your teen can easily fall asleep, even if it means collecting technological entertainment devices at night.
- Be consistent and establish a regular bedtime and wakeup schedule for weekdays. Remember, we cannot catch up on sleep in just one night. The process is much more gradual and long-term than you may think.
- Learn how much sleep your teen personally needs to function at his/her best.
- Use bright light in the morning to signal the brain when it should wake up, and turn off all lights and music at night to signal when it should prepare to sleep. Light has also been known to stimulate certain emotions in the morning, which seems to help many people wake up.
When studying, doing homework, and thinking about our overall performance in school, it’s important to consider our sleeping habits. We all enjoy staying up late on AIM, Facebook, talking on the phone, listening to music, and playing videogames, but the trade-off is huge. So whenever possible, make sure you’re giving your teens a break and get them to sleep!
For all the hyperbole devoted to Athlons and Pentiums, the hard drive is still the single most important component in a computer system. A faster hard drive makes more difference to the usability of a system than any other component. (Except RAM, of course, but RAM is very boring — if you can remember the three words "more is better" then you know almost all you ever need to know.) The extraordinary thing is that there is no recognised standard single measure of hard drive performance.
Measuring and comparing performance is always problematic, but it's particularly so with hard drives. There are innumerable hard drive benchmark testing programs, but despite some valiant efforts, none of them are particularly well-respected. Because all a computer's components interact with the hard drive, and because different users and programs use the drive in different ways, and most of all because this crazy industry keeps moving the technological goalposts, a software test that's almost fair on 1997 hardware can be all but useless with current kit — and probably will not run at all on 1991 equipment! To make matters more difficult still, how can you be certain that a Windows-based test is an appropriate predictor for Macs or Unix systems, or vice-versa?
But all this is to ignore the real and fundamental underlying difficulty, one that very few researchers seem to appreciate, let alone attempt to deal with in a comprehensive way. It is this: the real effect of computer performance in general and of hard drive performance in particular is the effect on the human being.
Outside of certain specialised industrial and scientific applications, the only purpose in making hard drives faster is to please the human being sitting at the keyboard. Sure, faster drives make happier users, but how much faster? Faster in what way? And what type of user? Hard drive performance measurement, in short, is not purely technical. Nor, of course, is it purely psychological: it is a little of both.
The first difficulty we have to deal with, if we are to make drive performance measurement more relevant to the human being, is the way that most benchmark numbers scale. They can be quite useful for comparing two or three quite similar drives, within, say, 10 or 20% of one another, but give counter-intuitive and almost meaningless results for drives of different generations or different market segments.
Like most physical measurements of matters which are, in the final analysis, perceptual, benchmark scores also tend to inflate the higher scores. As an example of this, consider using horsepower as an indication of the speed of your car. Sure, a 200 horsepower motor will make it go faster than a 100 horsepower motor, but not twice as fast.
Similarly, a computer with a 2000 MHz CPU is nothing like twice as fast as the same machine with a 1000 MHz chip. Partly this is to do with the fact that (in this example) we have only changed one part and the rest of the system — RAM, hard drive, video card and so on — is no different.
(Of course, this is exactly what you have to do when you are benchmarking. Though very dated now, some of our old CPU and motherboard performance tests explored this in more detail and Ace's Hardware developed quite a name for looking intelligently at the interrelationships between components.)
However, even if we double all the components: plug in a twice-as-fast hard drive, twice as much RAM, and so on, we still don't get a machine which is twice as fast. This is because our perception of computer speed, like our perception of most things, is not arithmetic, it is logarithmic.
An example: a noise which sounds just barely louder than another actually has about twice as much energy in the air vibrations. Our ears can't tell the difference between a sound and a second sound which is 20 percent louder. (If this seems absurd, take a look at any introductory sound engineering or psychology textbook.) Sensibly, audio engineers don't usually measure sound pressure levels directly, they measure them on a curving, logarithmic scale which "seems straight" to the human ear. This has the very useful result that you can measure any two sounds using the audio decibel (dBA or just dB for short), no matter how loud or how soft, and know how far apart they are: a 3dB difference (twice the actual power) is only just noticeable if you concentrate hard, a 6dB difference (four times the power) is noticeable under normal conditions, and so on. It doesn't matter if we are measuring the 20dB whisper of the breeze on a summer day or the 98dB of a rock band. The audio dB, in other words, like all good measurements, scales properly.
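The decibel scale described here is just a base-10 logarithm of a power ratio; a quick sketch (the sample ratios are illustrative):

```python
import math

def db_from_power_ratio(ratio):
    """Decibels corresponding to a ratio of acoustic powers: 10 * log10(P2/P1)."""
    return 10 * math.log10(ratio)

# Doubling the power adds about 3 dB; quadrupling adds about 6 dB.
print(round(db_from_power_ratio(2), 2))    # → 3.01
print(round(db_from_power_ratio(4), 2))    # → 6.02
# A 20% power increase is well under the just-noticeable threshold.
print(round(db_from_power_ratio(1.2), 2))  # → 0.79
```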
By the way, it's common to use a log scale to measure all sorts of things, not just sound volume. Examples are as varied as the dBV (for measuring voltage), the Beaufort Scale (for measuring wind force), and the Richter Scale (for measuring earthquakes). Indeed, even the musical scale works rather like this — it is measuring frequency not volume, but each octave is twice as large as the one before.
In summary, we need a measurement that is:
- Consistent across a wide range of drives: past and present, fast, slow and middling
- Appropriately scaled, so that a drive that "feels" 20% faster to the average user gets a 20% bigger number
- Anomaly-free
With all this in mind, we can return to the problem of measuring hard drive performance. It really ought to be very simple! Ignoring two or three very minor factors, there are only three things that determine hard drive speed, and all three are very easy to measure — so much so that in practice we can usually accept the manufacturer's published claims for them. (But with care — some manufacturers cheat!) The drive has to:
- Move to the right part of the disc (seek time).
- Wait while the disc spins until the first byte of the desired data passes under the read head (latency).
- Suck the rest of the data off the disc as fast as possible (Data Transfer Rate or DTR).
That's all there is to it. (We are ignoring relatively trivial factors like caching, external data rate and head switching; these are discussed elsewhere.) Seek, latency and DTR are public figures and easily verifiable. And yet there is no commonly accepted single number to describe hard drive performance. In contrast, despite the masses of talk surrounding the issue, no-one seriously argued with the major public benchmarks for CPU performance until a certain CPU manufacturer bought shares in the benchmark publishers and put its thumb in the scales. (No names here, let's just say its initials were "Intel".) Up till then though, things like Business Winstone 98 were really fairly decent guides. (Note that we are talking about real work here, not games.)
Hard drive designers no doubt spend a lot of time and money investigating the theoretical relationship between the three main determinants of hard drive performance, but the intricacies of this are not very relevant to us. We just want a single-figure real-world guide. In any case, real-world drives have strongly cross-correlated key performance factors. In other words, drives with good DTR tend to have low seek times and latency, and so on. This is for both technical and commercial reasons: remember that like everything else in this industry, hard drive performance is as much a product of social and economic factors as it is of technical ones. In other words, understanding the theoretical relationship between DTR, seek time and latency in the development lab is not particularly useful, as it doesn't tell us much about the small subset of all technically possible products that actually gets released onto the market.
But giving a real-world single-figure indication of drive performance should not be difficult! Any experienced computer techie can estimate a drive's performance fairly accurately just by sitting at the keyboard for a minute or two. Reasonably keen but non-technical computer users soon detect a difference if you swap in a significantly faster or slower drive. In conversation with other computer people, it's commonplace to agree on the merits or shortcomings of a particular model — much more so than with, say CPUs or video cards, about which even the experts disagree.
In the old days, the mid-eighties, let's say, it used to be common to just quote the seek time as a single measure. The habit came about because (back then) nearly all drives ran at 3600 RPM and thus had identical latency, and nearly all drives had exactly the same DTR: 5 Mbit/sec was an interface limitation of the old MFM controller. Even the handful of faster transfer drives only had 7.5 Mbit/sec DTRs, so seek time really was a pretty good descriptor. Not any more!
But the habit dies hard, of course. It's still quite common for non-technical people to ask about seek time thinking it equals performance. It stopped being very meaningful around about 1990. Seek time still has some validity as a single measure but only because it is strongly cross-correlated with latency and DTR in commercially successful drives, and because typical DTR has become so high that it is a lesser factor than it used to be. Usually, if the drive maker has spent all that money on giving a drive a fast seek time, they will have spent money on getting decent latency and a good DTR too.
Although drive latency is almost never quoted on its own, RPM figures are, and they have the exact same meaning. This is because latency is determined only by RPM: in other words the two figures are simply different ways of expressing the same thing. It is quite common to use RPM as a rough guide to performance. As a rough and ready measure, it's not bad at all. But it can't make fine distinctions, and like all single measures it can be quite misleading.
|Latency for common spindle speeds (ms)|

| Spindle speed (RPM) | Average latency (ms) |
|---------------------|----------------------|
| 3600 | 8.3 |
| 4500 | 6.7 |
| 5400 | 5.6 |
| 7200 | 4.2 |
| 10,000 | 3.0 |
| 15,000 | 2.0 |
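Since average rotational latency is simply the time for half a revolution, it can be computed directly from spindle speed; a minimal sketch:

```python
def avg_latency_ms(rpm):
    """Average rotational latency in milliseconds: half a revolution."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

for rpm in (3600, 5400, 7200, 10_000, 15_000):
    print(rpm, round(avg_latency_ms(rpm), 2))
# → 3600 8.33, 5400 5.56, 7200 4.17, 10000 3.0, 15000 2.0
```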
In lieu of anything better, some people use the internal data transfer rate (DTR) as the closest thing to a single figure description — it does correlate pretty well with actual performance (about 0.9). And, of course, it cross-correlates reasonably well with seek time and latency too — there is no technical reason why it has to, but few competent drive manufacturers spend millions developing a hard drive with top class data transfer and very poor seek time.
But just taking the DTR is not always accurate: some drives with low DTRs go rather well, and some with high DTRs are not as good as you'd expect. There is an interaction.
|Three equal speed drives with quite different performance characteristics|

| Model | Data rate | Seek | Latency | Performance |
|-------|-----------|------|---------|-------------|
The three drives above are from different eras and market segments but have roughly equal performance (something you can verify quite easily by trying them out in practice and comparing with a few much faster and much slower units). As you can see, they use three very different ways to kill the same cat. The 2217 has little more than half the data transfer rate of the Bigfoot, but holds its own because of its faster seek time and much better latency; the Quantum's excellent DTR makes up for its slow spin (i.e. high latency) and slowish seek time. The Seagate is average in all respects, and all three are about the same speed. (The alert reader will notice that we have slipped in a fast one here: we are using our own speed measurement in a kind of self-justification. If it bothers you enough, go get hold of some drives and run some other speed test on them — Winbench or whatever you like. You'll come up with broadly similar results.)
Although we have dismissed each of the three main single measures in turn, we have seen nothing to suggest that we need to introduce a fourth variable. Seek time, latency and DTR are clearly the key factors to consider. Is it possible to find some way of combining them to produce an accurate composite measure?
The first step is obvious: add the seek time and the latency together. If you think about how a drive works, you can see that it doesn't really matter if it has 15ms seek time and 5ms latency, or 5ms seek and 15ms latency: either way, the net average delay before the drive starts reading data is 20ms. (There is a complicating factor here to do with the non-random distribution of data, but we won't get into this just yet.) This yields access time. (Technically, "access time" is seek plus latency plus the various electronic delays involved, but these are so small that we can ignore them for now.)
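The symmetry described above is easy to see in code; a trivial sketch (electronic delays ignored, as in the text):

```python
def access_time_ms(seek_ms, latency_ms):
    """Net average delay before the first byte is read: seek plus latency."""
    return seek_ms + latency_ms

# 15 ms seek + 5 ms latency and 5 ms seek + 15 ms latency are equivalent:
assert access_time_ms(15, 5) == access_time_ms(5, 15) == 20
```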
That leaves us with just two variables. It should then be a simple mathematical process to discover the correct way to combine access time and DTR to produce a performance rating. The normal method is to take some sample measurements, find a formula for access time and DTR which produces the same answers as the actual measurements, and then predict some other measurements with it. If the predictions are close to the actual measured results, then the formula is correct and we can use it.
Unfortunately, we can't do this, because there is no standard, uncontroversial way to measure drive performance. We are right back with the problem we originally started with! We can make all the mathematical predictions we like, but there is no performance equivalent of the 12 inch ruler or the chemist's scales, and there is, therefore, no way to check the results. In the end, what we are measuring is as much psychological as it is technical: how much faster will the drive seem to you or me?
Now at last we are on firmer ground. There is no doubt, for example, that the Western Digital Caviar 140 was faster than the WD93044, or that there was very little difference between the Seagate Medalist 1720 and the IBM Deskstar 2. Similarly, just by sitting at the keyboard, we can easily tell that our old Seagate Cheetah 1 is still faster than any of the IDE drives made until quite late in the '90s.
There are many possible ways to combine DTR and delay to produce a single figure, of course. The easiest way to test them is to use a spreadsheet or statistical program to produce performance tables for some well-known drives. The majority of these possible combinations can be eliminated at a glance — the end figures obviously bear little relationship to reality. For the small number of transformations that make sense on first sight, closer inspection is required.
We need to pay particular attention to unusual cases: very fast or slow drives, and ones with an unusual mix of performance characteristics — these are the ones most likely to show up weaknesses in the formulae. It's particularly useful to do blind testing. (Or at least as close a thing to blind testing as possible, given the measurement difficulties we've outlined above!) By this we mean selecting a drive, estimating its performance rating from experience with it, then using the formula to calculate the actual performance rating. If the calculated result is close to the estimated result, then it's evidence in favour of the formula under test. If it is surprising, then either your estimate was out, or the formula could use revision.
Of the many dozens of formulae we tried, one clearly gives the best fit: 2 log (DTR) / √ (access). So far, it's the only one we've found which seems to work with consistent accuracy, though there may well be others which are equally good or better. We make no claim for a theoretical basis behind it, and in fact suspect that it will need modification to cope properly with the ever-faster drives that will be released in years to come.
We'd also expect that a truly universal formula of this nature would be generalizable to other, non-drive storage devices: this formula clearly is not — if you are interested, try plugging in the data transfer rate, seek and latency of a floppy drive: the results seem meaningless. It does, however, work very well in the range for which we are interested: hard drives from ST-412 to Cheetah X15-36LP. If and when we find a better way to express hard drive performance, we'll switch to it. It's not very meaningful to compare, say, a Cheetah 1 and an ST-225 directly, of course, but comparing either of them with drives of not too dissimilar a vintage works well and produces few surprises.
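The formula can be sketched directly. The text doesn't state the logarithm base, so base 10 is an assumption here, and the sample drive figures below are invented for illustration:

```python
import math

def performance_rating(dtr, seek_ms, latency_ms):
    """Single-figure rating per the article: 2 * log(DTR) / sqrt(access time).

    The log base (10 here) and the DTR units are assumptions; relative
    comparisons between drives are what the figure is meant for.
    """
    access_ms = seek_ms + latency_ms  # net delay before data starts flowing
    return 2 * math.log10(dtr) / math.sqrt(access_ms)

# Invented figures: a drive that is better on all three factors should
# always score higher.
older = performance_rating(dtr=60, seek_ms=14.0, latency_ms=5.6)
newer = performance_rating(dtr=300, seek_ms=9.0, latency_ms=4.2)
assert newer > older
```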
Of course, no single figure can hope to describe a drive's performance in the same way as the DTR, seek, and RPM (i.e. latency) figures we try to provide for all the drives we've listed; it can give you a rough idea of the general performance of the drive, but provides little of the flavour that the detailed figures add. For example, returning to the three more or less equal speed drives in the table above, you can see that the Bigfoot is easily the quickest if you mostly play big A/V files, that the Micropolis would be much better for database work, and you'd prefer to have the Seagate for more general purpose tasks. | <urn:uuid:8f3c7d47-2790-4d79-9781-b786ba4c41f0> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.redhill.net.au/d/d-speed.html",
"date": "2016-09-26T22:24:32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9600209593772888,
"token_count": 3565,
"score": 3.203125,
"int_score": 3
} |
These medical condition or symptom topics may be relevant to medical information for Positive myoclonus:
Positive myoclonus: Positive myoclonus is listed as a type of (or associated with) the following medical conditions in our database:
Myoclonus (medical condition): Sudden involuntary muscle twitching or movement.
Myoclonus (medical condition): Everyone has muscle twitches, such as hiccups or sleep starts, but clinical myoclonus is more severe.
Myoclonus: Myoclonus is a term that refers to brief, involuntary twitching of a muscle or a group of muscles. It describes a symptom and,... (Source: excerpt from NINDS Myoclonus Information Page: NINDS)
Myoclonus describes a symptom and generally is not a diagnosis of a disease. It refers to sudden,... (Source: excerpt from Myoclonus Fact Sheet: NINDS)
| <urn:uuid:4abf768e-51ad-423b-929b-b9ae762778c7> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.rightdiagnosis.com/medical/positive_myoclonus.htm",
"date": "2016-09-26T22:32:25",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8854433298110962,
"token_count": 228,
"score": 2.640625,
"int_score": 3
} |
What is the Latitude and Longitude of Kenya?
The latitude of Kenya is 1° North and the longitude of Kenya is 38° East.
A country of 47 districts, Kenya is located in East Africa. It is named after Mount Kenya, the second-highest peak in Africa at a height of 5,199 metres. Kenya is spread over an area of 580,367 square kilometres with a population of approximately 41,000,000. Kenya is the largest economy in East Africa. Nairobi is the capital of Kenya and also the most populous city in the country.
The geographical coordinates of Kenya are 1° North and 38° East. The country shares its borders with Ethiopia in the north, Tanzania in the south, Somalia in the northeast and Uganda in the west. The Great Rift Valley divides the highlands of Kenya. The Mau Forest is the largest forest in the region.
The tourism sector generates large revenues for the economy of Kenya. Game reserves and sandy beaches attract tourists from all parts of the world. Agriculture is also a very significant part of the country's economy. Tea, coffee and flowers are some of the major exports of the country. Industrially, it is the most developed country in East Africa.
The climate changes from tropical to arid as one moves from the coastal plains to northeast Kenya. Temperatures remain warm throughout the year. Winters are mild and much to the liking of the inhabitants.
Kenya has a diverse culture dominated by music, sports, art and literature. Sports like cricket, football, rugby and boxing are very popular among the youth of Kenya. In events like the Olympics and Commonwealth Games, Kenya's athletes have made big names for themselves.
| <urn:uuid:142205a8-25a1-431c-95df-b1a4ed7f822c> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.roseindia.net/answers/viewqa/TravelTourism/20328-Latitude-and-Longitude-of-Kenya.html",
"date": "2016-09-26T22:33:06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9509474635124207,
"token_count": 348,
"score": 3.390625,
"int_score": 3
} |
A few hundred spectators gathered at Fiesta Island’s shoreline on Wednesday afternoon for the arrival of a bloated fin whale, and to watch scientists gather data about what killed the 67-foot creature.
The research team will get even more information than they initially thought thanks to an announcement by Richard Branson’s Virgin Oceanic organization that it would pay for towing the whale to sea so researchers can observe the slow decomposition process created by a “whale fall.”
It took more than six hours to tow the carcass from Point Loma, where it was discovered Saturday, to the calm water of Mission Bay for the equivalent of an autopsy. After about an hour of tugging by three tractors chained together, city crews were unable to move the whale above the waterline as project managers had planned.
So, roughly two dozen researchers and assistants in yellow waders and rubber boots flocked to the water’s edge and started removing large chunks of skin and blubber. They exposed enough bone to determine that whale was hit by a ship, a common cause of death for the second-largest species of marine mammals.
“There was fracturing on about four meters of the whale’s vertebral column,” said Siri Hakala, a biologist for the National Oceanic and Atmospheric Administration.
Her colleagues gathered numerous tissue and organ samples in what counts as a windfall for marine biologists interested in whale DNA, hormones and numerous other measurements they rarely get to take.
Scientists only had a few hours of daylight for the smelly and physically demanding job — less than optimal. But they were buoyed by Virgin Oceanic’s plans for disposing of the body.
“The most ecologically responsible thing we want to do is put the whale back in the ocean,” said Eddie Kisfaludy, operations manager for Virgin Oceanic in San Diego. “We’ll tie onto it, drag it off La Jolla — about five miles offshore — and add about four tons of steel to it that will hopefully sink it in 2,500 feet of water.”
As the carcass decomposes, it should attract all kinds of sea life and become a sort of living laboratory populated by various fish, shrimp and bacteria. Of course, the process happens all the time in the ocean but rarely do scientists know exactly where to watch it unfold.
“All of those things are very interesting to science because we know very little about the deep sea,” Kisfaludy said. “Taking advantage of an opportunistic situation is what we are doing.”
He said the move comes at “significant cost” for Virgin but he won’t know what that is until the operation is over. Kisfaludy said the effort is part of a much larger vision by Branson to “get the world excited about exploring” the ocean depths.
Until Virgin stepped in, city officials had planned to cut the whale up and dump it in a landfill. Some residents found the idea unnatural but federal marine biologists said it was best to prevent the whale from washing up somewhere else.
The carcass is expected to remain on Fiesta Island over Thanksgiving until it’s towed to its final resting place on Friday. Federal fisheries agents said it is protected by federal law and police will patrol the area to make sure no one messes with it.
Wednesday afternoon, the beach was abuzz with interest.
“It is an amazing, amazing event,” said Arlene Gnade of Pacific Beach as she sat on a rock and watched the carcass float toward shore behind a lifeguard boat. “It’s sad, really sad. It’s not how it was meant to be.”
Gnade looked up and down the beach, where people stood with their dogs, kids and bikes to observe the spectacle. Some planned to be there. Others happened across the event and stayed to watch.
“Everybody is interested in this — kind of bonded in a way,” Gnade said. “I think we are connected to (whales) in ways we don’t understand.”
A few yards away, Joseph Woolfolk of Milwaukee shot photos after waiting 90 minutes for the whale to arrive. He’s visiting family for Thanksgiving and took the opportunity to witness something he can’t on the Great Lakes.
“This is one of the largest creatures in the world,” he said. “I am just really impressed.”
For San Diego lifeguards, the adventure started before 7 a.m. as they maneuvered a heavy-duty boat near the whale, which was first reported Saturday on the rocky shoreline beneath the Point Loma Wastewater Treatment Plant.
Lifeguards floated on rescue boards and swam as they latched a tow line to the whale’s tail.
“It was a challenge for the guards on the water because the tail was so big and so difficult to handle,” said lifeguard Lt. Gary Buchanan.
The boat got underway with the help of the high tide lifting the carcass off the shoreline. But it didn’t take long for the operation to hit a speed bump: “A fog bank came in right off the bat,” Buchanan said.
The boat headed south to avoid getting entangled in the kelp beds or lobster traps, a difficult task made more so by the limited visibility. “It was super slow going, but then we turned north and headed to Mission Bay,” Buchanan said. “We were making about 2 knots — pretty decent. The seas were pretty calm.”
As the boat approached the bay entrance, the tide was rolling out and working against the boat and tens of tons of additional drag. “Imagine us … going against a river,” Buchanan said as the vessel neared the beach. “We were battling for the last couple of hours. … It’s just been a slow and steady long tow.”
Copyright © 2016, The San Diego Union-Tribune | <urn:uuid:932f8fd5-8c60-4db1-8b03-b40e323495ec> | {
"dump": "CC-MAIN-2016-40",
"url": "http://www.sandiegouniontribune.com/news/environment/sdut-whale-hit-ship-destined-deep-sea-research-site-2011nov23-htmlstory.html",
"date": "2016-09-26T22:46:28",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660898.27/warc/CC-MAIN-20160924173740-00044-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9669291973114014,
"token_count": 1301,
"score": 2.5625,
"int_score": 3
} |
On-line version ISSN 1996-7489
Print version ISSN 0038-2353
S. Afr. j. sci. vol.105 no.9-10 Pretoria Sep./Oct. 2009
E.M. BordyI, *; A.J. BumbyII; O. CatuneanuIII; P.G. ErikssonII
IDepartment of Geology, Rhodes University, P.O. Box 94, Grahamstown 6140, South Africa
IIDepartment of Geology, University of Pretoria, Pretoria 0002, South Africa
IIIDepartment of Earth and Atmospheric Sciences, University of Alberta, 1-26 Earth Sciences Building, Edmonton, Alberta T6G 2E3, Canada
Complex structures in the sandstones of the Lower Jurassic aeolian Clarens Formation (Karoo Supergroup) are found at numerous localities throughout southern Africa, and can be assigned to five distinct architectural groups: (1) up to 3.3-m high, free-standing, slab-shaped forms of bioturbated sandstones with elliptical bases, orientated buttresses and an interconnecting large burrow system; (2) up to 1.2-m high, free-standing, irregular forms of bioturbated sandstones with 2-cm to 4-cm thick, massive walls, empty chambers and vertical shafts; (3) about 0.15-m to 0.25-m high, mainly bulbous, multiple forms with thin walls (<2 cm), hollow chambers with internal pillars and bridges; (4) about 0.15-m to 0.2-m (maximum 1-m) high, free-standing forms of aggregated solitary spheres associated with massive horizontal, orientated capsules or tubes, and meniscate tubes; and (5) about 5 cm in diameter, ovoid forms with weak internal shelving in a close-fitting cavity. Based on size, wall thickness, orientation and the presence of internal chambers, these complex structures are tentatively interpreted as ichnofossils of an Early Jurassic social organism; the different architectures are reflective of the different behaviours of more than one species, the history of structural change in architectural forms (ontogenetic series) or an architectural adaptation to local palaeoclimatic variability. While exact modern equivalents are unknown, some of these ichnofossils are comparable to nests (or parts of nests) constructed by extant termites, and thus these Jurassic structures are very tentatively interpreted here as having been made by a soil-dwelling social organism, probably of termite origin. This southern African discovery, along with reported Triassic and Jurassic termite ichnofossils from North America, supports previous hypotheses that sociality in insects, particularly in termites, likely evolved prior to the Pangea breakup in the Early Mesozoic.
Key words: terrestrial trace fossils, social insects, southern Gondwana climate, Early Jurassic, Pangea, Karoo
Here we document, for the first time, the range of possible trace fossil architectures found at different localities in the Lower Jurassic sandstones of the main Karoo Basin in southern Africa (Fig. 1). We tentatively suggest that these well-preserved architectural forms are likely the best preserved Early Jurassic social insect traces in Gondwana to date, probably made by a soil-dwelling social organism that was possibly related to termites. Conservatively, the body and ichnofossil record of termites is traced back only to the Cretaceous period;1,2 however, it appears that the termite ichnofossil record may have a pre-Cretaceous origin, as there is a growing number of Triassic and Jurassic trace fossils that have been attributed to termites.3–8 This current interpretation would thus further support the hypothesis that explains the worldwide distribution of social insects, in particular termites, by their early Mesozoic origin, prior to the breakup of Pangea.4,9–18
The structures under investigation are distinct associations of tunnels and spheroidal features of various sizes, constructed from, or excavated within, fine-to medium-grained sandstones in the uppermost part of the Lower Jurassic (Hettangian to Pliensbachian) Clarens Formation. This predominantly aeolian formation (part of the Karoo Supergroup) was deposited throughout southern Africa in a wet to dry desert environment with dominant easterly palaeowinds before the outpouring of the Karoo continental flood basalts 183 ± 1 MYA.19–22 Based on their distinctive architectures, the structures are classified into five groups (Table 1), of which Groups 1 and 2 are very similar to large, elaborate, free-standing, strongly bioturbated (tunnel diameter ~0.5 cm) sandstone pillars from the Tuli Basin,7 and therefore their description is only summarised here. Groups 3 and 5 have not been reported elsewhere, and elements of Group 4 were recently reported in Bordy.22
Group 1 (Figs 2a and 3a) are large (up to 3.3 m), laterally-flattened sandstone pillars with strong north–south orientation, with or without side buttresses. Group 2 (Figs 2b and 3b) are more irregular pillars of larger diameters (up to 1 m), but smaller vertical dimensions (up to 1.2 m) than those in Group 1. In addition, Group 2 pillars have thick, massive walls and a series of empty chambers and shafts in their interior. Associated with the sandstone pillars of both Groups 1 and 2, tubes with meniscate fill (average diameter 2 cm) have also been observed. It is noteworthy that Group 1 and 2 forms are preserved poorly in the main Karoo Basin due to enhanced weathering, compared to the present-day more arid Tuli Basin which lies more than 500 km to the north. At Site 1 (Fig. 1), 53 Group 1 pillar orientations have a mean vector of 359.22º, comparing favourably with the mean orientation of 357.39º in 153 analogous pillars in the Tuli Basin as well as with the orientation of recent northern Australian termite nests.7 Group 2 structures (Sites 1 and 6) in the present study area have a network of large, anastomosed, hollow tunnels, 5–20 cm in diameter (Fig. 2c), that seem to underlie the free-standing nests and are occasionally subhorizontal. These anastomosed tunnel walls are 2.5–3.5 cm thick, and have a smooth exterior and locally bioturbated inner surface. The individual tunnels have a fairly constant diameter.
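The mean vector orientations quoted (e.g. 359.22º for the 53 Site 1 pillars) are circular means: a simple arithmetic average fails near north, where 359º and 1º would wrongly average to 180º. A sketch of the standard vector-mean calculation (the sample azimuths are invented for illustration):

```python
import math

def circular_mean_deg(azimuths_deg):
    """Vector mean of orientations in degrees, handling wrap-around at 0/360."""
    x = sum(math.cos(math.radians(a)) for a in azimuths_deg)
    y = sum(math.sin(math.radians(a)) for a in azimuths_deg)
    return math.degrees(math.atan2(y, x)) % 360

# Orientations clustered around north; the arithmetic mean would be ~144,
# but the circular mean is essentially due north.
print(round(circular_mean_deg([358.0, 359.0, 0.5, 1.5, 2.0]), 2))  # → 0.2
```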
Group 3 ichnofossils (Figs 2d and 3c) are distinguished from Group 2 structures based on differences in internal morphology and bulbous external shape. Group 3 forms are shorter (average 15–25 cm) and have thinner walls (less than 2 cm, Fig. 2d) with a rough, sugary interior surface, lining highly irregular cavities. The cavities are hollow, but rare, broad, upward-tapering 'props' (less than 10 cm high) and irregular bridges (1–2 cm wide) may be preserved within them (Fig. 2e). Horizontal and subhorizontal hollow cylinders with external diameters of 4–6 cm are also associated with these cavities (for example, Sites 3 and 4).
Group 4 ichnofossils are an association of spheres (0.5–4 cm) and horizontally-orientated capsules (0.5 cm in diameter and 1–5 cm in length), and usually are exposed in ~20 m2 horizontal surfaces (Fig. 2f; see Bordy22 for details). Locally, small (15–20 cm high and 20–30 cm wide) columns of the small spheres emerge from the surface where the abundance of solitary and coalesced spheres is highest (Figs 2g and 3d). These columns have a partially-filled shaft in their interior. Locally, smooth-walled, horizontal tubes, 8 cm in diameter, extend laterally from the upright columns and are filled with sandstone bioturbated by a network of interconnected, fine galleries of 0.2–0.3 cm in diameter.
At Site 1, part of a larger columnar structure, consisting entirely of spheres of various sizes, is 1 m tall and 1.7 m wide. The spheres within the 'wall' were sorted neatly from the larger (3 cm to 4 cm diameter) spheres on the outer edge (Fig. 2h), to smaller (about 0.5 cm diameter) spheres on the inner edge. Intermediate-sized spheres occupy the central portions of the 'wall'. The surface, from which this 'wall' remnant protrudes, is covered by parallel, strongly north–south oriented, semi-connected capsules and tubes (about 0.5 cm in diameter), resembling an oriented, semi-connected tunnel system cast in sandstone (Fig. 3d). Surfaces identical to this, and with an approximate north–south orientation, were also observed in the Tuli Basin, at Site 3 (main Karoo Basin) and numerous other localities in South Africa where they co-occur with meniscate tubes (the latter having a uniform diameter of 1 cm) and other probable ichnofossils (see Bordy22 for details).
Group 5 ichnofossils were observed in eroded vertical surfaces showing an egg-shaped chamber (maximum diameter of 5 cm) with poorly-developed internal shelving in a close-fitting cavity (Figs 2i and 3e). This oval form is associated with 3–4 cm diameter irregular caverns, and a multitude of small, circular openings 0.3 to 0.5 cm in diameter, forming a contorted, interconnected passage system within the sandstone matrix.
Many organisms disturb the soil, on large and small scales. The Clarens Formation structures described here may have several different origins. Those that show a defined concentration of intense bioturbation in an otherwise undisturbed (bedded) sandstone are suggestive of traces of soil-dwelling life forms, most probably of social organisation. Among modern organisms, some termites construct very similar, but not identical, nests to the architectural features of these Early Jurassic structures.
Details on the similarities of the elaborate large pillars of Groups 1 and 2 to modern termite nests, and, in the case of Group 1, their resemblance to the magnetic mounds of Australia (Termitidae), are discussed in Bordy et al.7 Here we also consider that certain termites, especially the more primitive ones, are wood inhabitants,23 and thus it is possible that the pillar-shaped occurrences of Group 2, and the network of large, anastomosed, hollow tunnels and horizontal components of the structures associated with Groups 2 and 3 (Figs 2a and 2c), are possibly remains of termite-infested wood, especially when considering that fossil tree stumps in the Clarens Formation are present both in South Africa and Lesotho.24–26 If these structures were associated with wood-inhabiting termite species, the pillar-like sections (i.e. the tree stumps) probably were occupied as the main nest, and the radiating anastomosed and (semi-)horizontal tunnels formed part of the termite-infested tree root system. Alternatively, the larger horizontal and semi-horizontal tunnels may be interpreted as subterranean passageways or burrow systems, which probably were used for interconnecting adjacent, related nests or calies (as described by Noirot and Darlington27 for modern termites and by Duringer et al.2 in Miocene fossil termites), even though the size differences are apparent. It has to be emphasised that, other than the overall shape or external morphology, any direct evidence to substantiate the above interpretations is absent.
Furthermore, resemblance in the external morphology between Group 3 forms and modern nests of Odontotermes latericius is also present to some degree; however, these seem to differ internally as the cross-sectional exposures of the Group 3 structures lack the preservation of any centralised nest cavity, which is an integral part of the nest of the modern taxon. There is also some resemblance in the external morphology of the structures built by the modern harvester termites (Hodotermitidae) in South Africa28 and the semi-spherical chamber with horizontal galleries (internal shelving) of Group 5. However, the diagnostic horizontal shelving of the hive is not very well-preserved in our case, probably partially due to the coarse nature of the host sediment and weathering. Furthermore, the size differences are also notable, though it is possible that our structure represents small hives.
Group 4 structure morphologies do not correspond to any known modern nest forms;22 however, identical structures were interpreted as fossil termite calies from the late Miocene of Ethiopia.6
Unquestionably, and partly due to the fact that the very nature of trace fossils in general poses difficulties in their interpretation, the Clarens structures cannot be assigned unequivocally to termite origin, in contrast to more recent termite ichnofossils,2,29 largely because these Early Jurassic structures may have been modified over time by diagenetic, thermal, and more recently, surficial weathering processes. More specifically, the differences in their structure and that of other more easily defined, younger termite ichnofossils may be attributed to the hydrothermal alteration effect associated with the intrusion of post-Clarens Karoo dolerites, an idea originally suggested by de Villiers30 and van Eeden.31 While such an alteration effect of Karoo dolerites is commonly responsible for the formation of random concretions in the host sediments (which locally are rather abundant in the Clarens Formation, Fig. 4), it is unlikely that such magmatic processes resulted in the consistent shape and orientation, the intricate burrow systems, the back-filled traces, etc. associated with the currently-described structures. Although it is difficult to quantify the altering effect of such post-depositional processes, it is possible that they, at least partly, may have cemented, consolidated and overprinted the original structures by blurring or even destroying some of the more delicate biogenic features, while enhancing the preservation of the overall forms.
Considering the difficulty of establishing correlates with extant or fossil nest structures of termites, alternative explanations of these structures were considered on numerous occasions. For example, the idea that at least part of the structures represent pedotubes of roots or tree trunks,2 or megarhizoliths, like those in Pleistocene aeolian deposits of Spain (for a detailed description see Alonso-Zarza et al.32), has been investigated. To this effect, we concluded that our structures (especially parts of Groups 1 and 2) may superficially appear to converge on the morphologies of such plant-associated structures. Unlike ours, however, those structures do not contain intricate burrow systems that are restricted to the pillar-like structures and associated with back-filled traces, but are rather stratiform and much more widely spaced (as expected of boxwork of second-order rhizoliths). In addition, our structures show no remnant bedding or any microfabrics that characterise the megarhizoliths described by Alonso-Zarza et al.32
Taking the above into account, we propose that the trace makers of the Clarens trace fossils are either some unknown Jurassic social organisms or Jurassic precursors of modern termites (for example, an Early Jurassic ancestor later leading to both Isoptera and their closest relative, the cockroach family (Cryptocercidae)). It needs to be emphasised that, in spite of current termite phylogeny indicating that the nascent stages of termite evolution were only in the Late Jurassic (see Grimaldi and Engel23 for review), the large time gap (>50 Mya) in the fossil record between the earliest-known body fossils of Isoptera (from the Early Cretaceous Berriasian of Russia33) and the Late Triassic ichnofossils interpreted as the first putative termite trace fossils,4,16 in our view, does not preclude the possible existence of termite-like organisms (or termite ancestors) in the Early Jurassic.34 However, to date, apart from the overall architectural and size similarities between these trace fossils and modern termite nests, there is no other strong support for the true termite origin of the structures, and thus their attribution to termites is tentative at this stage.
If accepting that the resemblance of the Clarens ichnofossils to modern-day termite nests is sufficient evidence to tentatively identify them as possible fossil termite nests, their interpretation as components of concentrated nests conforms to the nomenclature based on Noirot35 and Roonwal36 and summarised by Hasiotis.6 The nomenclature was easily applicable to ichnofossils belonging to Groups 2 and 5, but proved difficult for Groups 1, 3 and 4 (Table 1). The practicality of the above terminology is illustrated by the various Group 5 ichnofossils, where the ovoid chamber subdivided by shelves into a series of compartments can be interpreted as a hive within a nest, or endoecie. The irregular caverns can be interpreted as chambers for royals, and the interconnected passages as galleries of the periecie. Due to the high diversity of ichnofossil forms, however, the abovementioned terminology was not always applicable. Its rigid application would have resulted in the overlooking of previously undescribed nest forms or associations. For example, even though the enigmatic features of Group 4 do not resemble any known termite architecture, because the architectural forms in Group 4 occur in a strict association defining a complex structure (e.g. occasional free-standing pillars floored by bioturbated horizontal surfaces), they can be interpreted as parts of a unique fossil termite nest with infrequently preserved epigeous nests and better preserved subterranean sections. Additionally, bioturbated masses characteristic of Group 1,7 may be identified as either the endoecie or periecie (based on the terminology of Hasiotis6), but the network of larger, anastomosed burrows associated with Group 1 architecture remains unnamed at this stage.
It is unlikely that the five structural groups from the Early Jurassic of the main Karoo Basin have an inorganic origin, given their structural variety, complexity, occasional systematic orientation, association with back-filled and massive trace fossils,22 and repeatability in different areas of southern Africa. In addition, large structures such as these cannot be built by solitary insects. Based on arguments presented here, it seems reasonable to conclude that the Clarens Formation ichnofossils may be products of a soil-dwelling, social organism that behaved similarly to modern termites (e.g. obtained their building matter by mixing their excrement with sediment particles), and left behind bioturbation features that closely resemble, in many aspects, modern termite nests.
To date, the documented trace fossil assemblages from the Early Jurassic geological record of southern Africa preserve both simple3,22 and a range of elaborate forms7,26 that have been assigned to social organisms either of unknown origin or related to termites. Worldwide, certain Mesozoic ichnofossils have been attributed to termite activity;4–6,8,15,37–39 however, some researchers1 question any pre-Cretaceous termite ichnofossil interpretations. Indisputably, in our case, literal correspondence with any modern termite nest type is lacking, and not all architectural forms can be fully interpreted in terms of modern termite behaviour (especially Group 4). Reasons for this may be that these Early Jurassic ichnofossils were altered in the past 200 million years, or that they represent a nest structure not reproduced by modern termites. In other words, they may represent activities of a precursor of termites or an extinct social organism that was not related to termites, but behaved similarly to them. The latter is unlikely because no evidence to date suggests that anything other than termites could have been responsible for similar structures.
However, the five architecturally-complex structures are not necessarily explicable as constructed nests of five different Early Jurassic social organism groups, since, even in the extant record: (1) convergent or parallel evolution of different termites produces similar nests; (2) nests built by the same species vary in appearance due to environmental conditions (e.g. Macrotermes sp.6,28,40); and (3) morphologies of an ontogenetic nest series may appear very different.6
Sedimentological and palaeontological evidence, associated with the diverse trace fossil forms in the uppermost part of the Clarens Formation, suggests the Early Jurassic of southern Africa was not excessively and uniformly arid.19,25,41 In particular, dinosaur footprints and plant fossils (e.g. petrified gymnosperm wood with well-developed growth rings24,25), as well as ephemeral stream deposits located along the same stratigraphic horizon in the main Karoo Basin, demonstrate the onset of a more humid phase in the Early Jurassic climatic history of southern Africa. Evidence of a relatively wet climatic condition is also abundant from the sedimentary interbeds from the lower part of the overlying, predominantly volcanic Drakensberg Group. These include carbonised plant matter, arthropod and insect fossils, fossil footprints and ephemeral fluvial-lacustrine deposits25,42 (also own observations). In addition, up to 30-m thick pillow basalt and interbedded sub-aqueous sedimentary strata in the lower part of the Drakensberg Group43 (also J. Marsh, pers. comm. and own observations) also indicate a relatively humid climate with fairly common standing bodies of water in this part of Pangaea in the Early Jurassic. The abovementioned evidence of wet period(s) depicts an environment that was not overly harsh and could have provided sufficient organic matter needed by the widely-distributed colonies of these social organisms during Clarens times. In view of the relative abundance of gymnosperm wood in the Clarens Formation, as well as a recent report indicating that mutualistic associations between termites and protozoa are at least 100 million years old,44 it is possible that the trace makers were wood feeders and their digestion might have been assisted by microbial symbionts (e.g. protozoa or bacteria).
Considering that in extant species of termites (e.g. Macrotermes bellicosus), nest architecture is dependent on ambient temperature,40 the architectural style of these supposed nests could hint at local microclimate conditions. According to Korb,40 thin-walled epigeous nests with ornate, ridged outer walls are constructed in warm temperatures where the need for insulation is low. Thin walls also allow for enhanced gas exchange, minimising CO2 levels within the nest, and optimising fungus-growing conditions. In areas where ambient temperature is lower (perhaps due to canopy shading or more vegetated interdune areas), epigeous nest walls are thicker for insulation, at the expense of reduced gas exchange.40 Differences in wall thickness between the coeval Groups 2 (thick walled) and 3 (thin walled) may therefore signal differences in local ambient palaeotemperatures due to variable microclimatic conditions. Structural architecture, and thus, nest constructional behaviour of the Jurassic social organism, may then reflect environmental variation, rather than species behaviour alone. Alternatively, these structures, especially where they co-occur (as in Sites 1 and 4), may represent age differences of colonies of the same species (i.e. an ontogenetic series); however, their relative ages are difficult to assess due to lack of stratigraphic markers and other age indicators, and the apparent co-occurrence may be a product of modern weathering.
The discovery of these diverse terrestrial ichnofossils in Gondwana and southern Pangea, and their attribution to social insects, probably termites, is highly relevant to the ongoing debate1,34 about the timing of the origin of termites as well as social behaviour in insects. The interpretation that termites might have been the possible makers of these intricate structures further corroborates the idea that sociality in insects evolved prior to the breakup of Pangea, in the early Mesozoic.9–13,18 Thus, these current southern African findings, as well as the previously reported early to mid-Mesozoic termite ichnofossil occurrences,4,6,7 challenge some current thinking about the late Mesozoic evolutionary origin of termites as social insects. Moreover, the discovery of Fruitafossor windscheffeli, a fossil mammal from the Late Jurassic,45 which had specialised dentition for a termite- and other insect-based diet, corroborates the presence of pre-Cretaceous termites.
The great variability of the ichnofossils presented here mirrors the behavioural complexity of their builders, and potentially suggests that highly social superorganisms, consisting of a multitude of individuals with coordinated activities, were present in the Early Jurassic of southern Gondwana. Considering that ichnofossils of termite nests have already been hypothesised from the Late Triassic and Late Jurassic4,6,8 of North America, these recent findings reveal that termites may have been wide ranging in Pangea in the early and mid-Mesozoic and thus could have evolved and radiated before the breakup of the supercontinent.
E.M.B. was a South African National Research Fund postdoctoral fellow during the initial stages of manuscript preparation, and has received JRC funds from Rhodes University. O.C. thanks the University of Alberta and Natural Sciences and Engineering Research Council of Canada for research support. A.J.B. and P.G.E. thank the University of Pretoria for research funding. Thanks to D. Ambrose, B.S. Rubidge, P. Jacklyn, V. Uys, S. Masters, M. Diop, the ladies of the Nhlapo family and L. Ntsaba for their scientific or field support. We also thank two anonymous reviewers and A. Morris for constructive review of the manuscript. The investigation complies with the current laws of the countries in which it was performed.
1. Genise J.F. (2004). Ichnotaxonomy and ichnostratigraphy of chambered trace fossils in palaeosols attributed to coleopterans, termites and ants. Special Publications of the Geological Society of London 228, 419–453.
2. Duringer P., Schuster M., Genise J.F., Mackaye H.T., Vignaud P. and Brunet M. (2007). New termite trace fossils: Galleries, nests and fungus combs from the Chad basin of Africa (Upper Miocene–Lower Pliocene). Palaeogeogr. Palaeoclimatol. Palaeoecol. 251, 323–353.
3. Smith R.M.H. and Kitching J. (1997). Sedimentology and vertebrate taphonomy of the Tritylodon Acme Zone: a reworked palaeosol in the Lower Jurassic Elliot Formation, Karoo Supergroup, South Africa. Palaeogeogr. Palaeoclimatol. Palaeoecol. 131, 29–50.
4. Hasiotis S.T. and Dubiel R.F. (1995). Termite (Insecta: Isoptera) nest ichnofossils from the Triassic Chinle Formation, Petrified Forest National Park, Arizona. Ichnos 4, 119–130.
5. Hasiotis S.T. and Demko T.M. (1996). Terrestrial and freshwater trace fossils, Upper Jurassic Morrison Formation, Colorado Plateau. Continental Jurassic Symposium. Mus. N. Arizona Bull. 60, 355–370.
6. Hasiotis S.T. (2003). Complex ichnofossils of solitary and social soil organisms: understanding their evolution and roles in terrestrial paleoecosystems. Palaeogeogr. Palaeoclimatol. Palaeoecol. 192, 259–320.
7. Bordy E.M., Bumby A., Catuneanu O. and Eriksson P.G. (2004). Advanced Early Jurassic termite (Insecta: Isoptera) nests: evidence from the Clarens Formation in the Tuli Basin, southern Africa. Palaios 19, 68–78.
8. Roberts E.M. and Tapanila L. (2006). A new social insect nest from the Upper Cretaceous Kaiparowits Formation of southern Utah. J. Paleontol. 80, 768–774.
9. Emerson A.E. (1955). Geographic origins and dispersions of termite genera. Fieldiana Zool. 37, 465–521.
10. Bouillion A. (1970). Termites of the Ethiopian region. In Biology of Termites, vol. 2, eds K. Krishna and F.M. Weesner, pp. 154–279. Academic Press, New York.
11. Emerson A.E. and Krishna K. (1975). The termite family Serritermitidae (Isoptera). Am. Mus. Novit. 2570, 1–31.
12. Carpenter F.M. and Burnham L. (1985). The geological record of insects. Annu. Rev. Earth Planet. Sci. 13, 297–314.
13. Labandeira C.C. and Sepkoski J.J. Jr. (1993). Insect diversity in the fossil record. Science 261, 310–315.
14. Hasiotis S.T. (1998). Continental trace fossils as the key to understand Jurassic terrestrial and freshwater ecosystems. Modern Geol. 22, 451–459.
15. Hasiotis S.T. (2002). Continental Trace Fossils. Short Course Notes, Number 51, SEPM, Tulsa.
16. Hasiotis S.T. (2004). Reconnaissance of Upper Jurassic Morrison Formation ichnofossils, Rocky Mountain region, USA: Environmental, stratigraphic, and climatic significance of terrestrial and freshwater ichnocoenoses. Sediment. Geol. 167, 277–368.
17. Eggleton P. (2000). Global patterns of termite diversity. In Termites: Evolution, Sociality, Symbioses, Ecology, eds T. Abe, D. Bignell and M. Higashi, pp. 25–51. Kluwer Academic Publishers, Dordrecht.
18. Strassmann J.E. and Queller D.C. (2007). Insect societies as divided organisms: the complexities of purpose and cross-purpose. Proc. Natl. Acad. Sci. USA 104, 8619–8626.
19. Beukes N.J. (1970). Stratigraphy and sedimentology of the Cave Sandstone Stage, Karoo System. In Proceedings 2nd IUGS Symposium on Gondwana Stratigraphy and Palaeontology, ed. S.H. Haughton, pp. 321–341. CSIR, Pretoria.
20. Eriksson P.G. (1986). Aeolian dune and alluvial fan deposits in the Clarens Formation of the Natal Drakensberg. Trans. Geol. Soc. S. Afr. 80, 389–393.
21. Duncan R.A., Hooper P.R., Rehacek J., Marsh J.S. and Duncan A.R. (1997). The timing and duration of the Karoo igneous event, southern Gondwana. J. Geophys. Res. 102, 18127–18138.
22. Bordy E.M. (2008). Enigmatic trace fossils from the Lower Jurassic Clarens Formation, southern Africa. Palaeontol. Electronica 11/3; 16A: 16p. Online at: http://palaeo-electronica.org/2008_3/150/index.html
23. Grimaldi D.A. and Engel M.S. (2005). Evolution of the Insects. Cambridge University Press, New York.
24. Meijs L. (1960). Notes on the occurrence of petrified wood in Basutoland. Papers No. 2, Pius XII University College, Roma (Basutoland).
25. Ellenberger P. (1970). Les niveaux paléontologiques de première apparition des mammifères primordiaux en Afrique du Sud et leur ichnologie: établissement de zones stratigraphiques détaillées dans le Stromberg du Lesotho (Afrique du Sud) (Trias supérieur à Jurassique). In Proceedings 2nd IUGS Symposium on Gondwana Stratigraphy and Palaeontology, ed. S.H. Haughton, pp. 343–370. CSIR, Pretoria.
26. Bordy E.M. and Catuneanu O. (2002). Sedimentology and palaeontology of upper Karoo aeolian strata (Early Jurassic) in the Tuli Basin, South Africa. J. Afr. Earth Sci. 35, 301–314.
27. Noirot C. and Darlington J. (2000). Termite nests: architecture, regulation and defence. In Termites: Evolution, Sociality, Symbioses, Ecology, eds T. Abe, D. Bignell and M. Higashi, pp. 121–139. Kluwer Academic Publishers, Dordrecht.
28. Uys V. (2002). A guide to the termite genera of southern Africa. In Plant Protection Research Institute Handbook No. 15. Plant Protection Research Institute, Agricultural Research Council, Pretoria.
29. Schuster M., Duringer P., Nel A., Brunet M., Vignaud P. and Mackaye H.T. (2000). Découverte de termitières fossiles dans les sites à Vertébrés du Pliocène tchadien: description, identification et implications paléoécologiques. [Discovery of Pliocene fossilised termitaries in Chadian vertebrate levels: description, identification and palaeoecological implications.] Comptes Rendus de l'Académie des Sciences Series IIA Earth Planet. Sci. 331, 15–20.
30. De Villiers S.B. (1967). Sundervormige structure in Holkranssandsteen van die Serie Stormberg, Distrik Messina. Ann. Geol. Surv. S. Afr. 6, 69–71.
31. Van Eeden O.R. (1968). Die ontstaan van silindervormige structure in die Holkrans-sandsteen. Ann. Geol. Surv. S. Afr. 7, 81.
32. Alonso-Zarza A.M., Genise J.F., Cabrera M.C., Mangas J., Martín-Pérez A., Valdeolmillos A.Y. and Dorado-Valiño M. (2008). Megarhizoliths in Pleistocene aeolian deposits from Gran Canaria (Spain): ichnological and palaeoenvironmental significance. Palaeogeogr. Palaeoclimatol. Palaeoecol. 265, 39–51.
33. Engel M.S., Grimaldi D.A. and Krishna K. (2007). Primitive termites from the Early Cretaceous of Asia (Isoptera). Stuttgarter Beitr. Naturkunde Serie B 371, 1–32.
34. Bordy E.M., Bumby A., Catuneanu O. and Eriksson P.G. (2005). Reply to a comment on advanced Early Jurassic termite (Insecta: Isoptera) nests: evidence from the Clarens Formation in the Tuli Basin, southern Africa (Bordy et al., 2004). Palaios 20, 307–311.
35. Noirot C. (1970). The nests of termites. In Biology of Termites, vol. 2, eds K. Krishna and F.M. Weesner, pp. 73–125. Academic Press, New York.
36. Roonwal M.L. (1970). Termites of the Oriental region. In Biology of Termites, vol. 2, eds K. Krishna and F.M. Weesner, pp. 315–391. Academic Press, New York.
37. Rohr D.M., Boucot A.J., Miller J. and Abbott M. (1986). Oldest termite nest from the Upper Cretaceous of West Texas. Geology 14, 87–88.
38. Hasiotis S.T. (2000). The invertebrate invasion and evolution of Mesozoic soil ecosystems: the ichnofossil record of ecological innovations. In Phanerozoic Terrestrial Ecosystems: Paleontological Society Short Course 6, eds R. Gastaldo and W. Dimichele, pp. 141–169. Yale University Reprographics and Imaging Services, New Haven, Connecticut.
39. Francis J.E. and Harland B.M. (2006). Termite borings in Early Cretaceous fossil wood, Isle of Wight, UK. Cretaceous Res. 27, 773–777.
40. Korb J. (2003). Thermoregulation and ventilation of termite mounds. Naturwissenschaften 90, 212–219.
41. Eriksson P.G., McCourt S. and Snyman C.P. (1994). A note on the petrography of upper Karoo sandstones in the Natal Drakensberg: implications for the Clarens Formation palaeoenvironment. Trans. Geol. Soc. S. Afr. 97, 101–105.
42. Haughton S.H. (1924). The fauna and stratigraphy of the Stormberg Series. Ann. S. Afr. Mus. 12, 323–497.
43. McCarthy M.J. (1970). An occurrence of pillow lava in a basal flow of the Drakensberg Volcanic Stage. In Proceedings 2nd IUGS Symposium on Gondwana Stratigraphy and Palaeontology, ed. S.H. Haughton, pp. 433–435. CSIR, Pretoria.
44. Poinar G.O. Jr. (2009). Description of an early Cretaceous termite (Isoptera: Kalotermitidae) and its associated intestinal protozoa, with comments on their co-evolution. Parasit. Vectors 2: 12. doi:10.1186/1756-3305-2-12
45. Luo Z-X. and Wible J.R. (2005). Late Jurassic digging mammal and early mammalian diversification. Science 308, 103–107.
Received 30 January. Accepted 10 July 2009.
On-line version ISSN 1996-7489
Print version ISSN 0038-2353
S. Afr. j. sci. vol.111 no.3-4 Pretoria Mar./Apr. 2015
John Butler-AdamI, II
IOffice of the Vice Principal: Research and Graduate Education, University of Pretoria, Pretoria, South Africa
IIAcademy of Science of South Africa, Pretoria, South Africa
BOOK TITLE: Engraved landscape Biesje Poort: Many voices
EDITORS: Mary E. Lange, Liana Müller Jansen, Roger C. Fisher, Keyan G. Tomaselli and David Morris
PUBLISHER: Tormentoso, Gordon's Bay, South Africa; ZAR360
The language of landscape is our native language. Landscape was the original dwelling; humans evolved among plants and animals, under the sky, upon the earth, near water. Everyone carries that legacy in body and mind. Humans touched, saw, heard, smelled, tasted, lived in and shaped landscapes before the species had words to describe what it did. Landscapes were the first human texts, read before the invention of other signs and symbols. Clouds, wind, and sun were clues to weather; ripples and eddies signs of rocks and life under water; caves and ledges promise of shelter; leaves guides to food; birdcalls warnings of predators.1
We have to read the scene. Understand the message that it contains, and who it was meant for.2
The relationships among people, their wide natural surroundings, their 'places' of greatest comfort and the immediate landscapes that encompass those places are older than the emergence of Homo sapiens sapiens, while maps of local areas, drawn on cave walls, exist from as far back as 18 500 years BP. More recently, in one of the earliest Western texts readily available to us, Homer3 writes of Hermes' visit to Calypso:
Around the cave grew a thick copse of alder, poplar and fragrant cypress, where large birds nested, owls, and falcons, and long-necked cormorants whose business is with the sea. And heavy with clustered grapes a mature cultivated vine went trailing across the hollow entrance. And four neighbouring springs, channelled this way and that, flowed with crystal water, and all around in soft meadows iris and wild celery flourished. Even an immortal passing by might pause and marvel, delighted in spirit...
The sense of a place and location, with meaning to the observer, emerges very clearly in this and many other texts of the time and has done ever since. But how best to conceptualise and come to terms with our human encounters with, and our use and interpretation of landscapes, has been of theoretical and analytical interest to geographers, biologists, political scientists, landscape architects and literary critics, amongst others, for decades. We live in landscapes, shape them and then leave them behind as texts, often as complicated as palimpsests - worked, shaped, 'written' and then 'rewritten'. The message left behind is sometimes interpretable, yet still present for latecomers to 'read' as a way of understanding what has gone before and, possibly, what it might have meant to those former dwellers.
The above is the substance of Engraved Landscape Biesje Poort: Many Voices - a book oddly dedicated 'to all silent voices of times past and ever present at Biesje Poort' and presumably not, therefore, to the many voices that the text aims to reveal to the reader. This contradiction is, however, just one of many unusual twists to the book, as it is a montage (in this case, a textual collection of both images and analyses) of personal observations, poems, translated poetry and serious scholarly chapters. The chapters cover the project methodology, and also history, rock art, archaeology, conservation, and the nature of indigeneity as they pertain to the landscape of Biesje Poort. An appraisal of the list of references quickly attests to the serious attention that has been afforded the scholarly contributions.
Some readers might find the poetry a little distracting, and the lead chapter, largely a personal physical journey into the southern Kalahari world of Biesje Poort, offers a rather unexpected entrée into what is a serious engagement with the significance of the landscape and with the analyses and insights in the chapters that constitute the remainder of the book. The 'conversations', however, deserve careful attention, as they are an integral element in the chain of meanings to be disinterred ('engaging the absence of storyline', in the words of Chapter 3) through the scholarly work of the authors.
Some previous reviewers of the book have referred, in various ways, to the 'absences' and 'silences' in the text (what, for example, might be said about the people who left few or no traces behind?), reminiscent of the now rather discredited school of post-modernism, but I believe that there is much more of value to be offered by the authors. In fact, apart from the new discoveries, information, insights and imaginings presented, one of the most valuable collective contributions that the book offers in the field of landscape analysis is that it is one of very few recent texts that speak directly to the interpretation and meaning of the messages that people leave behind as additions to, and statements about, their places. As long ago as 1993, Susanne Küchler4 wrote that landscape is 'the most generally accessible and widely shared aide-mémoire of a culture's knowledge and understanding of its past and future' and that the 'conception of landscape as inscribed surface implies a link between mapping and image-making...'. And it is these two ideas that are consistent throughout the book, making it what I believe to be amongst the first and, possibly, most comprehensive studies of the many voices that speak to us from the landscapes of South Africa. For it is not just the rock art but also the 'Western' and 'indigenous' mapping of the Biesje Poort landscape and its meanings that receive careful attention.
The scope of the book, coupled with its thorough scholarship, makes it a model 'multidisciplinary' text that will provide fascinating reading for readers of the South African Journal of Science from across a wide range of research areas.
1. Spirn AW. The language of landscape. New Haven: Yale University Press; 1988. p. 15.
2. Carrisi D. The whisperer. New York: Little Brown; 2009. p. 313.
3. Homer. The odyssey: Book V 43-91 [online]. No date [cited 2015 Feb 15]. Available from: http://www.poetryintranslation.com/PITBR/Greek/Odyssey5.htm
4. Küchler S. Landscape as memory. In: Bender B, editor. Landscape - Politics and perspectives. Oxford: Berg Publishers; 1993. p. 85.