Wednesday 21 October 2015

Kidney Stones: Effects and Cure

  • A kidney stone is a hard, crystalline mineral material formed within the kidney or urinary tract.
  • Nephrolithiasis is the medical term for kidney stones.
  • One in every 20 people develops kidney stones at some point in their life.
  • Kidney stones form when there is a decrease in urine volume and/or an excess of stone-forming substances in the urine.
  • Dehydration is a major risk factor for kidney stone formation.
  • Symptoms of a kidney stone include flank pain (the pain can be quite severe) and blood in the urine (hematuria).
  • People with certain medical conditions, such as gout, and those who take certain medications or supplements are at risk for kidney stones.
  • Diet and hereditary factors are also related to stone formation.
  • Diagnosis of kidney stones is best accomplished using an ultrasound, IVP, or a CT scan.
  • Most kidney stones will pass through the ureter to the bladder on their own with time.
  • Treatment includes pain-control medications and, in some cases, medications to facilitate the passage of urine.
  • If needed, lithotripsy or surgical techniques may be used for stones which do not pass through the ureter to the bladder on their own.
A kidney stone is a hard, crystalline mineral material formed within the kidney or urinary tract. Kidney stones are a common cause of blood in the urine (hematuria) and often severe pain in the abdomen, flank, or groin. Kidney stones are sometimes called renal calculi.
The condition of having kidney stones is termed nephrolithiasis. Having stones at any location in the urinary tract is referred to as urolithiasis, and the term ureterolithiasis is used to refer to stones located in the ureters.
Anyone may develop a kidney stone, but people with certain diseases and conditions (see below) or those who are taking certain medications are more susceptible to their development. Urinary tract stones are more common in men than in women. Most urinary stones develop in people 20-49 years of age, and those who are prone to multiple attacks of kidney stones usually develop their first stones during the second or third decade of life. People who have already had more than one kidney stone are prone to developing further stones.
In residents of industrialized countries, kidney stones are more common than stones in the bladder. The opposite is true for residents of developing areas of the world, where bladder stones are the most common. This difference is believed to be related to dietary factors. People who live in the southern or southwestern regions of the U.S. have a higher rate of kidney stone formation than those living in other areas. Over the last few decades, the percentage of people with kidney stones in the U.S. has been increasing, most likely related to the obesity epidemic.
A family history of kidney stones is also a risk factor for developing kidney stones. Kidney stones are more common in Asians and Caucasians than in Native Americans, Africans, or African Americans.
Uric acid kidney stones are more common in people with chronically elevated uric acid levels in their blood (hyperuricemia).
A small number of pregnant women develop kidney stones, and there is some evidence that pregnancy-related changes may increase the risk of stone formation. Factors that may contribute to stone formation during pregnancy include a slowing of the passage of urine due to increased progesterone levels and diminished fluid intake due to decreasing bladder capacity from the enlarging uterus. Healthy pregnant women also have a mild increase in their urinary calcium excretion. However, it remains unclear whether the changes of pregnancy are directly responsible for kidney stone formation or whether these women have another underlying factor that predisposes them to kidney stone formation.
Kidney stones form when there is a decrease in urine volume and/or an excess of stone-forming substances in the urine. The most common type of kidney stone contains calcium in combination with either oxalate or phosphate; a majority of kidney stones are calcium stones. Other chemical compounds that can form stones in the urinary tract include uric acid, magnesium ammonium phosphate (which forms struvite stones; see below), and the amino acid cystine.
Dehydration from reduced fluid intake or strenuous exercise without adequate fluid replacement increases the risk of kidney stones. Obstruction to the flow of urine can also lead to stone formation. In this regard, climate may be a risk factor for kidney stone development, since residents of hot and dry areas are more likely to become dehydrated and susceptible to stone formation.
Kidney stones can also result from infection in the urinary tract; these are known as struvite or infection stones. Metabolic abnormalities, including inherited disorders of metabolism, can alter the composition of the urine and increase an individual's risk of stone formation.
A number of different medical conditions can lead to an increased risk for developing kidney stones:
  • Gout results in chronically increased amount of uric acid in the blood and urine and can lead to the formation of uric acid stones.
  • Hypercalciuria (high calcium in the urine), an inherited condition, causes stones in more than half of cases. In this condition, too much calcium is absorbed from food and excreted into the urine, where it may form calcium phosphate or calcium oxalate stones.
  • Other conditions associated with an increased risk of kidney stones include hyperparathyroidism, kidney diseases such as renal tubular acidosis, and other inherited metabolic conditions, including cystinuria and hyperoxaluria.
  • Chronic diseases such as diabetes and high blood pressure (hypertension) are also associated with an increased risk of developing kidney stones.
  • People with inflammatory bowel disease are also more likely to develop kidney stones.
  • Those who have undergone intestinal bypass or ostomy surgery are also at increased risk for kidney stones.
  • Some medications also raise the risk of kidney stones. These medications include some diuretics, calcium-containing antacids, and the protease inhibitor indinavir (Crixivan), a drug used to treat HIV infection.
  • Dietary factors and practices may increase the risk of stone formation in susceptible individuals. In particular, inadequate fluid intake predisposes to dehydration, which is a major risk factor for stone formation. Other dietary practices that may increase an individual's risk of forming kidney stones include a high intake of animal protein, a high-salt diet, excessive sugar consumption, excessive vitamin D supplementation, and excessive intake of oxalate-containing foods such as spinach. Interestingly, low levels of dietary calcium intake may alter the calcium-oxalate balance and result in the increased excretion of oxalate and a propensity to form oxalate stones.
  • Hyperoxaluria as an inherited condition is uncommon and is known as primary hyperoxaluria. The elevated levels of oxalate in the urine increase the risk of stone formation. Primary hyperoxaluria is much less common than hyperoxaluria due to dietary factors as mentioned above.
While some kidney stones may not produce symptoms (known as "silent" stones), people who have kidney stones often report the sudden onset of excruciating, cramping pain in their low back and/or side, groin, or abdomen. Changes in body position do not relieve this pain. The abdominal, groin, and/or back pain typically waxes and wanes in severity, characteristic of colicky pain (the pain is sometimes referred to as renal colic). It may be so severe that it is often accompanied by nausea and vomiting. The pain has been described by many as the worst pain of their lives, even worse than the pain of childbirth or broken bones. Kidney stones also characteristically cause bloody urine. If infection is present in the urinary tract along with the stones, there may be fever and chills. Sometimes, symptoms such as difficulty urinating, urinary urgency, penile pain, or testicular pain may occur due to kidney stones.

How are kidney stones diagnosed?

The diagnosis of kidney stones is suspected when the typical pattern of symptoms is noted and other possible causes of the abdominal or flank pain have been excluded. The ideal test for diagnosing kidney stones is a matter of debate, and imaging tests are usually done to confirm the diagnosis. Many patients who go to the emergency room will have a non-contrast CT scan done; this can be performed rapidly and helps rule out other causes of flank or abdominal pain. However, a CT scan exposes patients to significant radiation, and ultrasound in combination with plain abdominal X-rays has recently been shown to be effective in diagnosing kidney stones.
In pregnant women or those who should avoid radiation exposure, an ultrasound examination may be done to help establish the diagnosis.
Most kidney stones eventually pass through the urinary tract on their own within 48 hours, with ample fluid intake. Ketorolac (Toradol), an injectable anti-inflammatory drug, and narcotics may be used for pain control when over-the-counter pain medications are not effective. Toradol, aspirin, and NSAIDs must be avoided if lithotripsy is to be done, because of the increased risk of bleeding. Intravenous pain medications can be given when nausea and vomiting are present.
Although there are no proven home remedies to dissolve kidney stones, home treatment may be considered for patients who have a known history of kidney stones. Since most kidney stones, given time, will pass through the ureter to the bladder on their own, treatment is directed toward control of symptoms. Home care in this case includes the consumption of plenty of fluids. Acetaminophen (Tylenol) may be used as pain medication if there is no contraindication to its use. If further pain medication is needed, stronger narcotic pain medications may be recommended.
There are several factors that influence the ability to pass a stone. These include the size of the person, prior stone passage, prostate enlargement, pregnancy, and the size of the stone. A 4 mm stone has an 80% chance of passage, while a 5 mm stone has a 20% chance. Stones larger than 9-10 mm rarely pass without specific treatment.
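The passage rates quoted above can be summarized in a small lookup. This is an illustrative sketch of the numbers in the text only, not clinical guidance; the function name is our own, and the 6-8 mm range, which the text does not quantify, is left undefined.

```python
# Illustrative sketch only -- encodes the spontaneous-passage rates
# quoted in the text; not clinical guidance.
def spontaneous_passage_chance(diameter_mm):
    if diameter_mm <= 4:
        return 0.80   # a 4 mm stone has an 80% chance of passing
    if diameter_mm <= 5:
        return 0.20   # a 5 mm stone has a 20% chance
    if diameter_mm >= 9:
        return 0.0    # stones of 9-10 mm or larger rarely pass untreated
    return None       # 6-8 mm: not quantified in the text

print(spontaneous_passage_chance(4))  # 0.8
```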
Some medications have been used to increase the passage rates of kidney stones. These include calcium channel blockers such as nifedipine (Adalat, Procardia, Afeditab, Nifediac) and alpha blockers such as tamsulosin (Flomax). These drugs may be prescribed to some people who have stones that do not rapidly pass through the urinary tract.
For kidney stones that do not pass on their own, a procedure called lithotripsy is often used. In this procedure, shock waves are used to break up a large stone into smaller pieces that can then pass through the urinary system.
Surgical techniques have also been developed to remove kidney stones when other treatment methods are not effective. This may be done through a small incision in the skin (percutaneous nephrolithotomy) or through an instrument known as a ureteroscope passed through the urethra and bladder up into the ureter.
Rather than having to undergo treatment, it is best to avoid kidney stones in the first place when possible. It can be especially helpful to drink more water, since low fluid intake and dehydration are major risk factors for kidney stone formation.
Depending on the cause of the kidney stones and an individual's medical history, changes in diet or medications are sometimes recommended to decrease the likelihood of developing further kidney stones. If one has passed a stone, it can be particularly helpful to have it analyzed in a laboratory to determine the precise type of stone so that specific prevention measures can be considered.
People who have a tendency to form calcium oxalate kidney stones may be advised to limit their consumption of foods high in oxalate, such as spinach, rhubarb, Swiss chard, beets, wheat germ, and peanuts. Drinking lemon juice or lemonade may also be helpful in preventing kidney stones.

What is the prognosis for kidney stones?

Most kidney stones will pass on their own, and successful treatments have been developed to remove larger stones or stones that do not pass. People who have had a kidney stone remain at risk for future stones throughout their lives.

Behavior



Behavior is action that alters the relationship between an organism and its environment.
Behavior may occur as a result of
  • an external stimulus (e.g., sight of a predator)
  • internal stimulus (e.g., hunger)
  • or, more often, a mixture of the two (e.g., mating behavior)
It is often useful to distinguish between
  • innate behavior = behavior determined by the "hard-wiring" of the nervous system. It is usually inflexible, with a given stimulus triggering a given response. A salamander raised away from water until long after its siblings begin swimming successfully will swim every bit as well as they do the very first time it is placed in the water. Clearly this rather elaborate response is "built in" to the species and not something that must be acquired by practice.
  • learned behavior = behavior that is more or less permanently altered as a result of the experience of the individual organism (e.g., learning to play baseball well).
Examples of innate behavior:
  • taxes
  • reflexes
  • instincts

Reflexes

The Withdrawal Reflex

When you touch a hot object, you quickly pull your hand away using the withdrawal reflex. These are the steps:
  • The stimulus is detected by receptors in the skin.
  • These initiate nerve impulses in sensory neurons leading from the receptors to the spinal cord.
  • The impulses travel into the spinal cord where the sensory nerve terminals synapse with interneurons.
    • Some of these synapse with motor neurons that travel out from the spinal cord entering mixed nerves that lead to the flexors that withdraw your hand.
    • Others synapse with inhibitory interneurons that suppress any motor output to extensors whose contraction would interfere with the withdrawal reflex.

Instincts

Instincts are complex behavior patterns which, like reflexes, are
  • inborn
  • rather inflexible
  • valuable in adapting the animal to its environment
They differ from reflexes in their complexity. The entire body participates in instinctive behavior, and an elaborate series of actions may be involved.
The scratching behavior of a dog and a European bullfinch, shown here, is part of their genetic heritage. The widespread behavior of scratching with a hind limb crossed over a forelimb is common to most birds, reptiles, and mammals. (Drawing courtesy of Rudolf Freund and Scientific American, 1958.)
So instincts are inherited just as the structure of tissues and organs is. Another example:
  • The African peach-faced lovebird carries nesting materials to the nesting site by tucking them in its feathers.
  • Its close relative, the Fischer's lovebird, uses its beak to transport nesting materials.
  • The two species can hybridize. When they do so, the offspring succeed only in carrying nesting material in their beaks. Nevertheless, they invariably go through the motions of trying to tuck the materials in their feathers first.

Interaction of Internal and External Stimuli

Instinctive behavior often depends on conditions in the internal environment. In many vertebrates, courtship and mating behavior will not occur unless sex hormones (estrogens in females, androgens in males) are present in the blood.
The target organ is a small region of the hypothalamus. When stimulated by sex hormones in its blood supply, the hypothalamus initiates the activities leading to mating.
The level of sex hormones is, in turn, regulated by the activity of the anterior lobe of the pituitary gland.
The drawing outlines the interactions of external and internal stimuli that lead an animal, such as a rabbit, to see a sexual partner and mate with it.

Releasers of Instinctive Behavior

So once the body is prepared for certain types of instinctive behavior, an external stimulus may be needed to initiate the response. N. Tinbergen (who shared the 1973 Nobel Prize with Konrad Lorenz and Karl von Frisch) showed that the stimulus need not necessarily be appropriate to be effective.
  • During the breeding season, the female three-spined stickleback normally follows the red-bellied male (a in the figure) to the nest that he has prepared.
  • He guides her into the nest (b) and then
  • prods the base of her tail (c).
  • She then lays eggs in the nest.
  • After doing so, the male drives her from the nest, enters it himself, and fertilizes the eggs (d).
  • Although this is the normal pattern, the female will follow almost any small red object to the nest, and
  • once within the nest, neither the male nor any other red object need be present.
  • Any object touching her near the base of her tail will cause her to release her eggs.
It is as though she were primed internally for each item of behavior and needed only one specific signal to release the behavior pattern.
For this reason, signals that trigger instinctive acts are called releasers. Once a particular response is released, it usually runs to completion even though the stimulus has been removed. One or two prods at the base of her tail will release the entire sequence of muscular actions involved in liberating her eggs.
Chemical signals (e.g., pheromones) serve as important releasers for the social insects: ants, bees, and termites. Many of these animals emit several different pheromones which elicit, for example, alarm behavior, mating behavior, and foraging behavior in other members of their species.
The studies of Tinbergen and others have shown that animals can often be induced to respond to inappropriate releasers. For example, a male robin defending its territory will repeatedly attack a simple clump of red feathers in preference to a stuffed robin that lacks the red breast of the males.
Although such behavior seems inappropriate to our eyes, it reveals a crucial feature of all animal behavior: animals respond selectively to certain aspects of the total sensory input they receive. Animals spend their lives bombarded by a myriad of sights, sounds, odors, etc. But their nervous system filters this mass of sensory data, and they respond only to those aspects that the evolutionary history of the species has proved to be significant for survival.

Saturday 17 October 2015

DNA as Hereditary Material

Griffith experiment: The evidence for the hereditary nature of DNA was provided by the British microbiologist Frederick Griffith.

  • Types of bacteria used by Griffith:
1. S-type bacteria: The normal pathogenic form of this bacterium is referred to as the S-type, or wild type, because it forms smooth colonies on a culture dish; it has a polysaccharide coat.
2. R-type bacteria: The mutant form lacks the enzyme needed to manufacture the polysaccharide coat; it is called the R-type because it forms rough colonies.

Experiment number 1:
  • Griffith infected mice with the virulent S-type strain of Streptococcus pneumoniae; the mice died of blood poisoning.
  • However, when he infected similar mice with the mutant R-type strain of pneumococcus, which lacks the polysaccharide coat, the mice showed no illness.
  • The coat was apparently necessary for virulence.
Experiment number 2:
  • To determine whether the polysaccharide coat itself contained a toxic substance, Griffith injected dead bacteria of the virulent S strain into mice; the mice remained perfectly healthy.
  • He then injected a mixture of dead S bacteria and live R (coatless) bacteria.
  • Unexpectedly, the mice developed the disease and died of blood poisoning.
Conclusion: The blood of the dead mice was found to contain high levels of live, virulent S-type bacteria, which seemed impossible. Somehow the information for making the coat had been transferred from the virulent S bacteria to the coatless R bacteria. But the bacteria that carried the coat were dead, so how was it transferred?




Three scientists, Oswald Avery, Colin MacLeod, and Maclyn McCarty, managed to show that Frederick Griffith’s transforming factor was in fact DNA, i.e. DNA is the heritable substance.

At first, Avery refused to believe Griffith’s results that actually challenged his own research on pneumococcal capsules - how could a rough capsule be converted into a smooth one? However, he soon confirmed Griffith’s results and set about trying to purify this mysterious “transforming principle” - a substance that could cause a heritable change of bacterial cells.
They extracted from Streptococcus pneumoniae S (containing a capsule) bacteria purified DNA, proteins and other materials and mixed R bacteria (lacking a capsule) with these different materials, and only those mixed with DNA were transformed into S bacteria. Therefore, DNA is the “transforming factor” and not proteins or other materials.
Amazingly, not everyone was convinced by the experiments of Avery, MacLeod, and McCarty, as it was still widely assumed that genetic information was carried in protein. Firstly, due to Levene's influential "tetranucleotide hypothesis", many considered DNA to be a "stupid molecule," made up of a repeat of the four chemical bases without any variation. Secondly, few biologists thought that genetics could be applied to bacteria, since they lacked chromosomes and sexual reproduction. Although their experimental findings were quickly and independently confirmed, a number of scientists still considered that protein contaminants were responsible for the results. It was not until the experiments of Hershey and Chase (1952) that DNA was finally accepted to be the genetic material.

Hershey and Chase experiment

Historical background


In the early twentieth century, biologists thought that proteins carried genetic information. This was based on the belief that proteins were more complex than DNA. Phoebus Levene's influential "tetranucleotide hypothesis", which incorrectly proposed that DNA was a repeating set of identical nucleotides, supported this conclusion. The results of the Avery–MacLeod–McCarty experiment, published in 1944, suggested that DNA was the genetic material, but there was still some hesitation within the general scientific community to accept this, which set the stage for the Hershey–Chase experiment.
Hershey and Chase, along with others who had done related experiments, confirmed that DNA was the biomolecule that carried genetic information. Before that, Oswald Avery, Colin MacLeod, and Maclyn McCarty had shown that DNA led to the transformation of one strain of Streptococcus pneumoniae to another that was more virulent. The results of these experiments provided evidence that DNA was the biomolecule that carried genetic information.

Methods and results

Structural overview of T2 phage
Hershey and Chase needed to be able to examine different parts of the phages they were studying separately, so they needed to isolate the phage subsections. Viruses were known to be composed of a protein shell and DNA, so they chose to uniquely label each with a different elemental isotope. This allowed each to be observed and analyzed separately. Since phosphorus is contained in DNA but not amino acids, radioactive phosphorus-32 was used to label the DNA contained in the T2 phage. Radioactive sulfur-35 was used to label the protein sections of the T2 phage, because sulfur is contained in amino acids but not DNA.
Hershey and Chase inserted the radioactive elements into the bacteriophages by adding the isotopes to separate media within which bacteria were allowed to grow for 4 hours before bacteriophage introduction. When the bacteriophages infected the bacteria, the progeny contained the radioactive isotopes in their structures. This procedure was performed once for the sulfur-labeled phages and once for the phosphorus-labeled phages. The labeled progeny were then allowed to infect unlabeled bacteria. The phage coats remained on the outside of the bacteria, while the genetic material entered. Centrifugation allowed for the separation of the phage coats from the bacteria. These bacteria were lysed to release phage progeny. The progeny of the phages that were originally labeled with 32P remained labeled, while the progeny of the phages originally labeled with 35S were unlabeled. Thus, the Hershey–Chase experiment helped confirm that DNA, not protein, is the genetic material.
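The labeling logic of this procedure can be captured in a toy model (the function and field names below are our own, and this is a sketch of the reasoning, not the actual lab protocol): on infection the phage's DNA enters the bacterium while the protein coat stays outside, and each part carries its radioactive label with it.

```python
# Toy model of the Hershey-Chase labeling logic (hypothetical names,
# not the actual lab protocol): DNA enters the cell, the coat stays
# outside, and each part keeps whatever label was attached to it.
def infect(phage):
    inside_cell = {"part": "DNA", "label": phage["dna_label"]}
    outside_cell = {"part": "coat", "label": phage["coat_label"]}
    return inside_cell, outside_cell

# 32P travels with the DNA into the cell, so progeny are labeled.
inside, outside = infect({"dna_label": "32P", "coat_label": None})

# 35S stays outside with the coat, so progeny are unlabeled.
inside2, outside2 = infect({"dna_label": None, "coat_label": "35S"})
```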
Hershey and Chase showed that the introduction of deoxyribonuclease (referred to as DNase), an enzyme that breaks down DNA, into a solution containing the labeled bacteriophages did not introduce any 32P into the solution. This demonstrated that the phage is resistant to the enzyme while intact. Additionally, they were able to plasmolyze the bacteriophages so that they went into osmotic shock, which effectively created a solution containing most of the 32P and a heavier solution containing structures called "ghosts" that contained the 35S and the protein coat of the virus. It was found that these "ghosts" could adsorb to bacteria that were susceptible to T2, although they contained no DNA and were simply the remains of the original bacteriophage capsule. They concluded that the protein protected the DNA from DNase, but that once the two were separated and the phage was inactivated, the DNase could hydrolyze the phage DNA.

Experiment and conclusions

Hershey and Chase were also able to prove that the DNA from the phage is inserted into the bacteria shortly after the virus attaches to its host. Using a high speed blender they were able to force the bacteriophages from the bacterial cells after adsorption. The lack of 32P labeled DNA remaining in the solution after the bacteriophages had been allowed to adsorb to the bacteria showed that the phage DNA was transferred into the bacterial cell. The presence of almost all the radioactive 35S in the solution showed that the protein coat that protects the DNA before adsorption stayed outside the cell.
Hershey and Chase concluded that DNA, not protein, was the genetic material. They determined that a protective protein coat was formed around the bacteriophage, but that the internal DNA is what conferred its ability to produce progeny inside a bacteria. They showed that, in growth, protein has no function, while DNA has some function. They determined this from the amount of radioactive material remaining outside of the cell. Only 20% of the 32P remained outside the cell, demonstrating that it was incorporated with DNA in the cell's genetic material. All of the 35S in the protein coats remained outside the cell, showing it was not incorporated into the cell, and that protein is not the genetic material.
Hershey and Chase's experiment concluded that little sulfur-containing material entered the bacterial cell. However, no specific conclusions could be made regarding whether material that is sulfur-free enters the bacterial cell after phage adsorption. Further research was necessary to conclude that it was solely the bacteriophages' DNA that entered the cell, and not a combination of protein and DNA in which the protein did not contain any sulfur.

Discussion

Confirmation


Hershey and Chase concluded that protein was likely not the hereditary genetic material. However, they did not draw any conclusions regarding the specific function of DNA as hereditary material, saying only that it must have some undefined role.
Confirmation and clarity came a year later in 1953, when James D. Watson and Francis Crick correctly hypothesized, in their journal article "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid", the double helix structure of DNA, and suggested the copying mechanism by which DNA functions as hereditary material. Furthermore, Watson and Crick suggested that DNA, the genetic material, is responsible for the synthesis of the thousands of proteins found in cells. They had made this proposal based on the structural similarity that exists between the two macromolecules, that is, both protein and DNA are linear sequences of amino acids and nucleotides respectively.

Other experiments

Once the Hershey–Chase experiment was published, the scientific community generally acknowledged that DNA was the genetic material. This discovery led to a more detailed investigation of DNA to determine its composition as well as its 3D structure. Using X-ray crystallography, the structure of DNA was discovered by James Watson and Francis Crick with the help of previously documented experimental evidence by Maurice Wilkins and Rosalind Franklin. Knowledge of the structure of DNA led scientists to examine the nature of genetic coding and, in turn, understand the process of protein synthesis. George Gamow proposed that the genetic code was composed of sequences of three DNA base pairs known as triplets or codons, each representing one of the twenty amino acids. Genetic coding helped researchers to understand the mechanism of gene expression, the process by which information from a gene is used in protein synthesis. Since then, much research has been conducted to modulate steps in the gene expression process. These steps include transcription, RNA splicing, translation, and post-translational modification, which are used to control the chemical and structural nature of proteins. Moreover, genetic engineering gives engineers the ability to directly manipulate the genetic materials of organisms using recombinant DNA techniques. The first recombinant DNA molecule was created by Paul Berg in 1972 when he combined DNA from the monkey virus SV40 with that of the lambda virus.
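The triplet code Gamow anticipated can be illustrated with a short sketch: DNA is read three bases at a time, and each codon names one amino acid. The table below is a small subset of the standard genetic code (the full table has 64 codons), and the function name is our own.

```python
# A small subset of the standard genetic code; the full table has 64 codons.
CODON_TABLE = {
    "ATG": "Met",  # also the start codon
    "TTT": "Phe", "AAA": "Lys", "GGC": "Gly", "TGC": "Cys",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(dna):
    """Read a DNA coding strand three bases at a time, mapping each
    codon to its amino acid and stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "???")
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

print(translate("ATGTTTAAATAA"))  # ['Met', 'Phe', 'Lys']
```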
Experiments on hereditary material during the time of the Hershey-Chase Experiment often used bacteriophages as a model organism. Bacteriophages lend themselves to experiments on hereditary material because they incorporate their genetic material into their host cell's genetic material (making them useful tools), they multiply quickly, and they are easily collected by researchers.


The Importance of Biotechnology in Today’s Time






Biotechnology is the third wave in biological science and represents an interface of basic and applied sciences, where a gradual and subtle transformation of science into technology can be witnessed. Biotechnology is defined as the application of scientific and engineering principles to the processing of materials by biological agents to provide goods and services. Biotechnology comprises a number of technologies based upon an increasing understanding of biology at the cellular and molecular level.
The Bible already provides numerous examples of biotechnology: it deals with the conversion of grapes to wine, of dough to bread, and of milk to cheese. The oldest biotechnological processes are found in microbial fermentations, as borne out by a Babylonian tablet dated circa 6000 B.C. explaining the preparation of beer. The Sumerians were able to brew as many as twenty types of beer in the third millennium B.C. In about 4000 B.C., leavened bread was produced with the aid of yeast. During the Vedic period (5000-7000 B.C.), Aryans had been performing the daily Agnihotra or Yajna. In Ayurveda, the production of 'Asava' and 'Arista' using different substrates and the flowers of mahua (Madhuca indica) or dhataki (Woodfordia fruticosa) has been well characterized since the Vedic period. One of the materials used in Yajna is ghee, a fermented product of milk. The term 'biotechnology' was described in a Bulletin of the Bureau of Biotechnology published in July 1920 from the office of the same name in Leeds, Yorkshire. The articles in this bulletin described the varied roles of microbes in areas ranging from the leather industry to pest control.
There are numerous sub-fields of biotechnology. They are:
  1. Red biotechnology is biotechnology applied to medical processes. Some examples are the designing of organisms to produce antibiotics, and the engineering of genetic cures for diseases through genomic manipulation.
  2. White biotechnology, also known as grey biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. White biotechnology tends to consume fewer resources than traditional processes when used to produce industrial goods.
  3. Green biotechnology is biotechnology applied to agricultural processes. An example is the designing of an organism to grow under specific environmental conditions or in the presence (or absence) of certain agricultural chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby eliminating the need for external application of pesticides. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate.
  4. The term blue biotechnology has also been used to describe the marine and aquatic applications of biotechnology, but its use is relatively rare.
Broadly biotechnology can be divided into two major branches:
  1. Non-gene biotechnology– deals with whole cell, tissues or even individual organisms
  2. Gene biotechnology– involves gene manipulation, cloning, etc.
Non-gene biotechnology is the more widely practiced branch: plant tissue culture, hybrid seed production, microbial fermentation, and the production of hybridoma antibodies or immunochemicals are widespread biotechnology practices.
For centuries humans have used microorganisms to produce foods and drinks without understanding the microbial processes underlying their production. In recent years, growing knowledge of the biochemistry of industrially important organisms has deepened our understanding of the biosynthetic pathways and regulatory control mechanisms microorganisms use to produce various metabolites. Notable biotechnologies for food processing include fermentation technology, enzyme technology and monoclonal antibody technology. Beneficial microbes participate in fermentation processes, producing many useful metabolites such as enzymes, organic acids, solvents, vitamins, amino acids, antibiotics, growth regulators, flavors and nutritious foods. Some leading food bioprocessing technologies are dairy processing and alcohol and beverage processing. The production of alcoholic beverages, including wine, beer, whiskey, rum and sake, utilizes microorganisms like Clostridium acetobutylicum, Leuconostoc mesenteroides, Aspergillus oryzae, Saccharomyces cerevisiae, Rhizopus sp., Mucor sp., etc. Biotechnologically produced organic acids like citric acid, acetic acid, gluconic acid, D-lactic acid and fumaric acid also have very high market value.
The application of biotechnology can result in (a) new ways of producing existing products with the use of new inputs, and (b) ways of producing entirely new products. Examples of the former include the production of gasoline from ethanol, which in turn is produced from sugar; the production of insulin using recombinant DNA technology; the production of hepatitis B vaccine using recombinant DNA technology; and the extraction of copper using mineral-leaching bacteria. The conventional inputs are oil for gasoline, porcine pancreases for insulin, human blood for hepatitis vaccine, and conventional mining techniques for copper. Examples of the latter include medicinal substances which are produced in minute quantities in the human body and which cannot otherwise be synthesized, such as insulin, interleukin or tissue plasminogen activator (TPA).
A wide variety of microorganisms are now being employed as tools in biotechnology to produce useful products or services. Raw materials can be converted to useful finished products both by ordinary chemical processes and by biological means. Generally, the costs of chemical conversion are quite high, as the reactions require high temperatures or pressures. In contrast, biological alternatives, using microbes or cultured animal or plant cells, operate at physiologically normal conditions of temperature, pressure, pH, etc. Over the next few decades biotechnology may well overtake chemical technology, and many chemicals that are today produced chemically could be made through biotechnology.
Enzyme technology is an area of considerable current interest and development. Enzymes are biological catalysts and have been used for many years as isolated agents, particularly in food, e.g. rennin, papain and invertase. Microbial enzymes have increasingly replaced plant and animal enzymes; thus amylases from Bacillus and Aspergillus have substituted for those of malted wheat and barley in brewing, baking and biscuit-making, and also in the textile industry. Today, enzyme technologies have four distinct areas of application: in cosmetics, therapy, the food and feed industry, and for diagnostic purposes. One very important recent application is the production of foodstuffs from non-traditional raw materials: for instance, the development of the sweetener high-fructose corn syrup (HFCS), also called isoglucose. Another recent application is the use of phytase in animal feed.
Nowadays, interest in traditional fermentation technology for food processing has greatly increased because of the emphasis placed upon plant materials as human foods. Single-cell protein (SCP) is a term generally accepted to mean microbial cells (algae, bacteria, actinomycetes and fungi) grown and harvested for animal or human food. During World War II, when there were shortages of proteins and vitamins in the diet, the Germans produced yeasts and a mold (Geotrichum candidum) in some quantity for food. Research on SCP has been stimulated by concern over the eventual food crisis or food shortages that will occur if the world’s population is not controlled. Many scientists believe that the use of microbial fermentations and the development of an industry to produce and supply SCP are possible solutions to a shortage of protein if and when the amount of protein produced or obtained by agriculture and fishing becomes insufficient.
The roots of molecular biology were established only after the British biophysicist Francis Crick and the American biochemist James Watson, in 1953, proposed the structure of the DNA (deoxyribonucleic acid) molecule, well known as the chemical bearer of genetic information in most organisms. We really began understanding and utilizing molecular biotechnology (or gene biotechnology) only after recombinant DNA technology was developed in the 1970s. Daniel Nathans of Johns Hopkins University, in 1971, utilized restriction enzymes to split the DNA of a monkey tumor virus, Simian Virus 40 (SV40). Recombinant DNA technology, often referred to as genetic engineering or gene manipulation, involves extraction of a particular gene of interest from one organism and insertion of that gene into another organism. Genetic manipulation may be defined as the extracellular (i.e. in vitro) creation of new forms or arrangements of DNA in such a way as to allow the incorporation or continued propagation of the altered genetic condition in nature. Among the first scientists to attempt genetic manipulation was Paul Berg of Stanford University, who in 1971, along with his co-workers, opened the DNA molecule of SV40, spliced it into a bacterial chromosome, and so constructed the first recombinant DNA molecule.
Genetic engineering techniques are useful tools for genetic research. They help us gain insight into the structure, function and regulation of genes. They also help in preparing physical maps of viral genomes; maps of several viruses, such as SV40, polyoma virus and adenovirus, are already available. Another goal of genetic engineering is to design a superbug which can degrade most of the major hydrocarbon components of petroleum. Different strains of Pseudomonas putida contain plasmids with genes coding for enzymes that digest a single family of hydrocarbons. By crossing the various strains of this bacterium, a superbug has been created. The multiplasmid bacterium is able to grow on a diet of crude oil, giving the superbug potential for cleaning up oil spills.
Biotechnology is widely used in pharmacy to create more efficient and less expensive drugs. Recombinant DNA technology is used for the production of specific enzymes, which enhance the rate of production of a particular range of antibodies in the organism. Hormones such as somatostatin, insulin and human growth hormone can be synthesized easily and cheaply. The first human hormone to be synthesized by genetic engineering was somatostatin, a brain hormone originating from the hypothalamus; it acts to inhibit the release of human growth hormone and insulin, and is relevant to the treatment of diabetes, pancreatitis and a few other conditions. Genentech, a California-based company, has produced human growth hormone (hGH) from genetically engineered bacteria. Human insulin, or humulin, was the first genetically engineered pharmaceutical product, developed by Eli Lilly and Company in 1982. Bovine somatotropin (BST) is produced to increase milk production in cows. Antibiotics are chemical substances produced by several microorganisms, and recombinant DNA technology has helped increase their production; for example, penicillin is produced at present at about 150,000 units/ml, against about 10 units/ml in the 1950s. Antibiotics produced using such technology have very specific effects and cause fewer side effects. Currently, scientists are working on vaccines for fatal illnesses such as AIDS, hepatitis, malaria, flu, and even some forms of cancer. Interferon, an antiviral protein, is prepared from mammalian cells by recombinant DNA technology. By cloning cDNA from human interferon genes, it has been found that there are a large number of interferons differing in amino acid sequences and properties. Large quantities of interferon are prepared in yeast cells by fermentation.
Shrof expects that in the near future vaccines will come in more convenient forms: “some will come in the form of mouthwash; others will be swallowed in time-release capsules, avoiding the need for boosters.” (Shrof 57)
One of the best known applications of genetic engineering is that of the creation of genetically modified organisms (GMOs). There are potentially momentous biotechnological applications of GM, for example oral vaccines produced naturally in fruit, at very low cost. This represents, however, a spread of genetic modification to medical purposes and opens an ethical door to other uses of the technology to directly modify human genomes. A genetically modified food is a food product derived in whole or part from a genetically modified organism (GMO) such as a crop plant, animal or microbe such as yeast. Genetically modified foods have been available since the 1990s. The principal ingredients of GM foods currently available are derived from genetically modified soybean, maize and canola. Between 1996 and 2001, the total surface area of land cultivated with GMOs had increased by a factor of 30, from 17,000 km² (4.2 million acres) to 520,000 km² (128 million acres). The value for 2002 was 145 million acres (587,000 km²) and for 2003 was 167 million acres (676,000 km²). (Internet 6) Future applications of GMOs include bananas that produce human vaccines against infectious diseases such as Hepatitis B, fish that mature more quickly, fruit and nut trees that yield years earlier, and plants that produce new plastics with unique properties. Now scientists have transformed Tobacco Mosaic Virus (TMV) to infect host plants and produce immunizing proteins rather than debilitating leaf shrivel, turning greenhouse tobacco into a biofactory for plague vaccine.
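The area figures quoted above can be reconciled with a few lines of arithmetic. The sketch below, in Python, uses the standard conversion 1 km² ≈ 247.105 acres and the yearly figures given in the text; it simply checks that the square-kilometre and acre values, and the thirtyfold growth claim, are mutually consistent.

```python
# Sanity check of the GM-crop area figures quoted in the text.
ACRES_PER_KM2 = 247.105  # standard conversion: 1 km^2 = 247.105 acres

# Cultivated area in km^2, as stated in the text
area_km2 = {1996: 17_000, 2001: 520_000, 2002: 587_000, 2003: 676_000}

# 1996 -> 2001 growth: the text says "a factor of 30"
growth_factor = area_km2[2001] / area_km2[1996]
print(f"1996-2001 growth factor: {growth_factor:.1f}x")

# Convert each year to millions of acres and compare with the quoted values
for year, km2 in sorted(area_km2.items()):
    acres_millions = km2 * ACRES_PER_KM2 / 1e6
    print(f"{year}: {km2:,} km^2 = {acres_millions:.0f} million acres")
```

Running this reproduces the quoted values closely: roughly 4 million acres in 1996, 128 million in 2001, 145 million in 2002 and 167 million in 2003, with a growth factor of about 30.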
Genetic diseases could be treated through the use of genetic engineering. Defective genes in an organism cause genetic disorders. If a defective gene could be identified and located in a particular group of cells, it could be replaced with a functional one. The transgenic cells are then implanted into the organism, resulting in a cure of the disorder. Cloning is a relatively new sector of biotechnology, but it promises answers to very important problems related to surgery. Tissues and organs could be cloned for surgical purposes. If scientists could isolate stem cells and then direct their development, they would be able to create any kind of tissue, organ or even a whole part of a body.
Another revolutionary tool of biotechnology is DNA fingerprinting. DNA fingerprints are useful in several applications of human health care research, as well as in the justice system. DNA fingerprinting is used to diagnose inherited disorders in prenatal and newborn babies in hospitals around the world. These disorders may include cystic fibrosis, hemophilia, Huntington’s disease, familial Alzheimer’s, sickle cell anemia, thalassemia, and many others. Early detection of such disorders enables the medical staff to prepare themselves and the parents for proper treatment of the child. In some programs, genetic counselors use DNA fingerprint information to help prospective parents understand the risk of having an affected child. DNA fingerprint information can also help in developing cures for inherited disorders. DNA fingerprints help link suspects to biological evidence – blood or semen stains, hair, or items of clothing – found at the scene of a crime, and so help solve crimes. Another important use of DNA fingerprints in the court system is to establish paternity in custody and child support litigation. The U.S. armed services have just begun a program to collect DNA fingerprints from all personnel for later use, in case they are needed to identify casualties or persons missing in action or for suspect verification.
Due to the revolutionary development of biotechnology during last couple of decades agriculture has drastically advanced. Sensational achievements were made in both plant cultivation and animal husbandry. Plants have been improved in four different ways:
  • Enhanced potential for more vigorous growth and increasing yields
  • Increased resistance to natural predators and pests, including insects and disease-causing microorganisms.
  • Production of hybrids exhibiting a combination of superior traits derived from two different strains or even different species
  • Selection of genetic variants with desirable qualities such as increased protein value, increased content of limiting amino acids, which are essential in the human diet, or smaller plant size, reducing vulnerability to adverse weather conditions.
Another important area of biotechnology is improvement of livestock. Improvement in disease control, efficiency of reproduction, yields of livestock products i.e. meat, milk, wool, eggs, composition of livestock products i.e. leaner meat, feed value of low quality feeds i.e. straw; are some of the applications of biotechnology.
One of the major scientific revolutions of the twentieth century was the breaking of the genetic code and the development of tools that enable scientists to probe the molecules of life with incredible precision. Now, in the twenty-first century, these developments in biology are being married with ever-increasing computer power to help us face the challenges the new century brings. Bioinformatics is the name given to the new discipline that has emerged at the interface of biology and computing. The huge amounts of genetic data (DNA, RNA, amino acid and protein sequences) of various organisms, from bacteria to humans, being generated worldwide are stored in computer databases. Specialized software programs are used to find, visualize, and analyze the information and, most importantly, communicate it to other people. Various computer tools are used to predict protein structure, which is valuable information for the development of vaccines and diagnostic tools as well as more effective drugs. Bioinformatics can help in the easy and early detection of diseases like cancer and diabetes with the help of microarray chips (microarrays are miniature arrays of gene fragments attached to glass slides). Bioinformatics also helps scientists construct phylogenetic trees based on molecular biology, ultimately contributing to the study of evolution. Computer simulations model such things as population dynamics, or calculate the cumulative genetic health of a breeding pool (in agriculture) or an endangered population (in conservation). One very exciting potential of this field is that entire DNA sequences, or genomes, of endangered species can be preserved.
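To give a flavor of the kind of sequence analysis described above, here is a minimal sketch in Python of two of the most elementary bioinformatics operations: computing the GC content of a DNA sequence and finding the reverse complement of a strand. The sequence used is invented for illustration; real analyses run the same ideas over databases of millions of bases.

```python
# Two elementary DNA-sequence operations, as used throughout bioinformatics.
from collections import Counter

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence (GC-rich regions
    often mark genes and affect DNA stability)."""
    counts = Counter(seq.upper())
    return (counts["G"] + counts["C"]) / len(seq)

def reverse_complement(seq: str) -> str:
    """Sequence of the opposite DNA strand, read 5' to 3'."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq.upper()))

# A made-up example sequence for illustration
seq = "ATGGCGTACGTTAGC"
print(f"GC content: {gc_content(seq):.2f}")
print(f"Reverse complement: {reverse_complement(seq)}")
```

Tools like microarray analysis and phylogenetic tree construction build on exactly these kinds of string operations, scaled up and combined with statistics.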
Biotechnology has a promising future and will be credited with some revolutionary technologies. Recent advances in bioenergy, bioremediation, synthetic biology, DNA computers, virtual cells, genomics, proteomics, bioinformatics and bio-nanotechnology have made biotechnology even more powerful. The recent discovery that DNA can conduct electricity and behave as a superconductor has opened a new realm in modern science. In the future biotechnology will have a profound impact on the world economy. Biotechnology is a golden tool for solving some of the key global problems: epidemics, fatal diseases, global warming, the rising petroleum fuel crisis and, above all, poverty.
For all the positive effects of biotechnology there are some possible side effects. Nobody knows what ecological hazards could be caused by transgenic organisms, and some even speculate that transgenic organisms could fall into the wrong hands and be used to develop bioweapons. Opponents of genetic engineering argue that the science is very young and needs much more research.
The path from a test tube to the field is not a straight highway. Both intellectual and financial resources must be mobilized before new discoveries find their way into industrial applications. In conclusion, biotechnology has proved to be extremely productive and innovative, and the 21st century should be the century of biotechnology.

Everything you wanted to know






Introduction
Sixty-five million years ago the dinosaurs died out, along with more than 50% of other life forms on the planet. This mass extinction is so dramatic that for many years it was used to mark the boundary between the Cretaceous Period, when the last dinosaurs lived, and the Tertiary Period, when no dinosaurs remained. This is called the Cretaceous/Tertiary (or K/T) boundary, and the associated extinction is often termed the K/T extinction event. The name "Tertiary" is a holdover from the early days of geology, and many geologists now prefer the term "Paleogene" for the time period that immediately follows the Cretaceous. These scientists refer to the Cretaceous/Paleogene or K/P boundary, which represents the same moment in time as the K/T event. Since their discovery in the nineteenth century, the reason for the dinosaurs' demise has been a matter of speculation and debate. Early paleontologists, working prior to Darwin's theory of evolution by natural selection, suggested that dinosaurs represented the remains of animals that had perished in the Biblical Flood. This explained both the fact and the speed of their disappearance. But as other extinctions came to light, and Darwin's theory gained acceptance, this explanation fell out of favor.

For many decades, the fossil record of dinosaurs was poorly known. During that time it was clear that dinosaurs had gone extinct, but it was not yet understood that this extinction was relatively sudden and simultaneous with those of many other species. Only at the end of the nineteenth century did paleontologists realize that nearly all dinosaurs had gone extinct within a brief period of time at the end of the Cretaceous Period.
For most of the next century, scientists focused on explanations for how the extinction might have occurred. Most theories focused on climate change, perhaps brought on by volcanism, lowering sea level, and shifting continents. But hundreds of other theories were developed, some reasonable but others rather far-fetched (including decimation by visiting aliens, widespread dinosaur "wars", and "Paläoweltschmerz", the idea that dinosaurs just got tired and went extinct). It was often popularly thought that the evolving mammals simply ate enough of the dinosaurs' eggs to drive them to extinction.
Regardless of the details, most of these theories shared the common thought that dinosaurs were a group of animals that had reached the end of their evolutionary life. Their extinction was seen as inevitable, the product of having evolved for too long. In most extinction scenarios, the dinosaurs were simply unable to cope with competition from mammals and the changing climate, and so they all went extinct.

As dinosaur science began to alter this hypothesis, producing a new view of dinosaurs as successful and viable organisms, many of these extinction theories became less tenable. New information from fossil localities suggested that many other organisms, most unrelated to dinosaurs, had also gone extinct at the same time. New theories were required to explain these new discoveries and newly understood facts. A favored theory was that tectonically induced climate change interfered with food chains, disrupting them enough to cause widespread extinction among many different organisms.

Alvarez Hypothesis: Origin and Evidence
[Image: Parrish illustration of asteroid blast]
In the late 1970s geologist Walter Alvarez and his father, Nobel prize-winning physicist Luis Alvarez, identified an unusual clay layer at the K/T boundary in Italy. This clay contained an unusually high concentration of the rare element iridium: 30 times the level typically found in the Earth's crust. Why was the discovery of iridium so important? Although iridium is rare in the crust, it is abundant in many meteorites and asteroids, as well as in the Earth's core. With this evidence, Alvarez hypothesized that an asteroid must have struck the Earth right at the K/T boundary. Further investigation has revealed that this iridium-rich layer of clay occurs at more than 100 sites around the world, providing evidence that this was truly a worldwide event.
[Image: Gubbio clay]
It was estimated that, to produce the amount of iridium in the clay layer, the impacting object would have been 10 km in diameter. Further evidence of an impact was discovered in the form of small grains of impact-shocked quartz and beads of impact glass (tektites) within the clay layer. Shocked quartz is formed by high-pressure shock waves, and is found at nuclear bomb sites and in meteor craters. Tektites are formed from the condensation of vaporized meteorite particles. Although shocked quartz has been found in K/T layers worldwide, tektites decrease in size with increasing distance from the impact site until they are altogether absent.
[Image: Tektite]
These pieces of evidence, along with high levels of iridium, point to an extraterrestrial impact at the end of the Cretaceous Period. Thus, the end of the dinosaurs’ reign may have been caused by an asteroid, not by sea level change or volcanism. Initially this theory was highly controversial, but today an extraterrestrial impact is considered to be a key factor in the K/T extinction event.
One of the main objections to the Alvarez theory was the absence of a 65-million-year-old crater anywhere on the Earth’s surface. Surely such an enormous asteroid impact would have left a sizable crater behind. In 1991, geologists discovered evidence for a huge crater at Chicxulub (pronounced CHIK-shoo-loob), on the Yucatan Peninsula in Mexico. Although the crater had long since been buried by hundreds of meters of sediment, surveys of magnetic and gravitational fields revealed its circular structure. In addition, recent sensitive topographic mapping has shown a low mound that represents part of the crater’s rim. At 180 km across, and dated to 65 mya, the crater is of the right size and age to have been caused by a 10 km asteroid hitting Earth at the end of the Cretaceous Period.



Effects of the Asteroid Impact
The devastation caused by such an event is difficult to imagine. The asteroid would have hit with the force of 100,000 billion tons of TNT. This would have generated an earthquake one thousand times greater than the largest ever recorded, with winds of over 400 kph. A massive fireball would have boiled nearby seas, destroying everything for thousands of kilometers. Forests throughout most of North America and some of South America would have been flattened by the shock wave. Evidence of a giant tsunami has been found around the Gulf of Mexico and Caribbean, as well as in Spain and Brazil. It may have had an effect as far away as New Zealand.
[Image: Map showing asteroid impact in Gulf of Mexico]
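The TNT figure quoted above comes straight from the kinetic energy of a 10 km object moving at cosmic speeds. The Python sketch below reproduces the order of magnitude; the density and impact velocity are assumed typical values for a rocky asteroid, not figures from the text.

```python
# Back-of-the-envelope estimate of the K/T impact energy.
import math

diameter_m = 10_000        # 10 km asteroid, as in the text
radius_m = diameter_m / 2
density = 3_000            # kg/m^3, typical rocky asteroid (assumed)
velocity = 20_000          # m/s, typical impact speed (assumed)
J_PER_TON_TNT = 4.184e9    # energy released by one ton of TNT

# Kinetic energy = 1/2 * mass * velocity^2, mass from a spherical volume
mass_kg = density * (4 / 3) * math.pi * radius_m**3
energy_j = 0.5 * mass_kg * velocity**2
energy_billion_tons_tnt = energy_j / J_PER_TON_TNT / 1e9

print(f"Kinetic energy: {energy_j:.2e} J "
      f"~ {energy_billion_tons_tnt:,.0f} billion tons of TNT")
```

With these assumptions the result is several tens of thousands of billions of tons of TNT, the same order of magnitude as the 100,000 billion tons cited; a modestly faster or denser asteroid closes the gap.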
Despite the enormity of the destruction from the initial impact, the dinosaurs and their contemporaries might have survived and eventually recovered, but the subsequent long-term effects of the blast were even more deadly. Ninety thousand cubic kilometers of debris would have been blasted into the atmosphere, some reaching into space only to re-enter at high speeds. This could have heated the atmosphere sufficiently to ignite global forest fires. While the heavier pieces of ejecta settled back down on Earth, fine dust particles would have remained in the atmosphere and significantly blocked sunlight, causing an effect called an “impact winter”. There is much debate about the duration and severity of the impact winter following the K/T impact, but the darkness and cold temperatures might have reduced photosynthesis and collapsed food chains globally.

The amount of carbon and sulfur contained in the rock at the impact site would have aggravated these devastating effects. As much as 100 billion tons of sulfur and 10 trillion tons of carbon would have been vaporized by the impact and blown into the atmosphere. The resulting sulfate aerosols would have stayed in the atmosphere for several years; the resulting carbon dioxide would have stayed airborne for several hundred years. Initially the sulfate aerosols would have contributed to global cooling by blocking out the sun, before precipitating as acid rain. After the dust and sulfates settled out and ended the cooling, global warming would have begun. The carbon dioxide levels, being two to three times normal, would have caused extreme greenhouse conditions, raising global temperatures by as much as 10°C. Although some life forms may have survived the years of darkness and freezing temperatures, many surely died out in the subsequent centuries of heat.


Other Extinction Hypotheses
Although the impact hypothesis is the most widely accepted explanation of the K/T extinction, other theories still remain. Evidence of widespread volcanism, particularly at the Deccan traps in India, correlates with this moment in time as well. Prolonged volcanism could have led to atmospheric and climatic changes similar to those proposed for an asteroid impact. However, volcanism does not provide an alternate explanation for the high levels of iridium in the clay layer, because high concentrations of iridium occur deep in Earth's core rather than in the mantle, which is the source of the magma that was erupted.
One debate centers on whether the extinction was truly as sudden as it appears, or whether this is an artifact of the geological record. Some scientists believe that dinosaurs went extinct gradually, and were doing so for millions of years prior to the K/T boundary. Studies in the Western Interior of North America have suggested that the latest layers of Cretaceous sediments contain fewer dinosaur species than those below. These results have been challenged by other researchers, who claim that no such decrease is apparent in the Late Cretaceous record.

Deep-sea Evidence for the Impact Hypothesis
[Image: Cretaceous foram specimens]
The general acceptance of the K/T asteroid impact theory has led many scientists to focus on the specific mechanisms that may have contributed to this dramatic extinction event. Although the impact was an important factor in the extinction of so many organisms, the event has also proven to be complex. In particular, the selectivity of the extinction has puzzled many paleontologists: why did dinosaurs go extinct but not crocodiles or turtles? Why did marine reptiles, belemnites, and ammonites disappear, but not fish or sharks? Why some mammals and not others?

[Image: Tertiary foram specimens]
Other scientists have focused on the extinction record preserved in deep-sea sediments in order to better understand the chain of events that followed the asteroid impact. Dr. Brian T. Huber, micropaleontologist in the National Museum of Natural History Dept. of Paleobiology, has studied evidence from a deep-sea drilling core taken 500-580 km off the northeastern coast of Florida during an Ocean Drilling Program cruise. Huber studied microscopic marine organisms called foraminifera taken from the core. The specimens were extracted from both Cretaceous and Tertiary age sediments. In one 40 cm core interval, he noticed a dramatic difference between the types of planktonic (floating) foraminifera that were alive prior to the boundary event and those that lived after. Prior to the extinction, large, ornate planktonic foraminifera were abundant, but afterward most specimens belonged to smaller, less ornate species. Overall more than 90% of the Cretaceous planktonic foraminifera had gone extinct. This is comparable to the extinction rate of calcareous nannofossils, another group of microscopic fossils that are abundant in the deep-sea sediment. In addition to the foraminifera, Huber also found specimens of shocked quartz and tektites, direct evidence of the impact itself.
[Image: Deep-sea core showing dark-colored impact debris]
The core also offered visual clues to the changes that occurred at the time of the extinction. The sediment undergoes a dramatic color change from white Cretaceous chalk in the lower portion of the core, to a dark gray, coarse-grained tektite layer in the middle, to a whitish gray Tertiary muddy chalk in the upper part. At the top of the tektite layer is a very thin, rust-colored, iron-rich layer known as the fireball layer. This rust layer, which has been found at a number of complete K/T impact horizons around the world, contains actual particles of the asteroid along with fine soot and ash that rained down on Earth's surface after the collision. This provides further evidence supporting the asteroid impact hypothesis.



Post-Extinction Recovery
It has been estimated that the planet took 1-2 million years to fully recover from the asteroid impact. In deep-sea sediments, several very small species of foraminifera with simple, unadorned shells appeared within a few thousand years after the extinction event, but several million years elapsed before species diversity, shell ornamentation, and shell sizes increased to values comparable to those before the impact event. The small planktonic foraminifera are considered opportunistic species with rapid rates of reproduction and higher tolerance for changing environmental conditions.

A similar pattern of extinction and recovery has been observed in the North American fossil land plant record. In southwestern North Dakota, where the fossil record of land plants is most complete and best studied, abrupt extinction of 70 to 90% of plant species was immediately followed by a dramatic increase in the abundance of ferns. Because the North American forests were decimated by the asteroid impact, ferns were able to rapidly disperse and dominate much of the newly cleared land surface for hundreds to thousands of years afterwards. Full recovery of North American forests, resulting from appearance of new species and repopulation by surviving species, took from several hundred thousand to over a million years.
[Image: Parrish illustration of life post-extinction]
Of the many long-term effects produced by the global devastation at the K/T boundary, the most obvious is the disappearance of all non-avian dinosaurs. Yet the close of the Age of Dinosaurs meant the start of the Age of Mammals. Although mammals had existed alongside the dinosaurs for over a hundred million years, they had remained small and comparatively rare. The extinction of the dinosaurs allowed mammals to come into dominance, as they evolved into new and larger forms throughout the Tertiary Period.

Within the first five million years of the Paleocene Epoch, large mammals had appeared for the first time. Some of them were the earliest members of modern groups, including primitive carnivorans and ungulates. The first primates (members of the mammalian order that includes humans) appeared about 10 million years after the K/T boundary event. Modern bird groups diversified as well, in the absence of pterosaurs (which had also gone extinct). Perhaps without the extinction of the dinosaurs, the evolution of mammals and the subsequent rise of humans would have never happened. And although recent history might well be called the Human Age, the time that the human race has dominated planet Earth is but a blink in geologic terms. It is certain that the world will change again. Indeed, we may be in the midst of another mass extinction event right now.