Friday, 17 August 2012

Grashof Number


Grashof number, Gr, is a nondimensional parameter used in the correlation of heat and mass transfer due to thermally induced natural convection at a solid surface immersed in a fluid. It is defined as
\[ \mathrm{Gr} = \frac{g\,\xi\,\Delta T\,l^{3}}{\nu^{2}} \tag{1} \]
where
  • g = acceleration due to gravity, m s⁻²
  • l = representative dimension, m
  • ξ = coefficient of expansion of the fluid, K⁻¹
  • ΔT = temperature difference between the surface and the bulk of the fluid, K
  • ν = kinematic viscosity of the fluid, m² s⁻¹.
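For a feel of the magnitudes involved, here is a quick numerical evaluation. It is not from the original text; the property values are illustrative assumptions for air near room temperature (ν ≈ 1.5 × 10⁻⁵ m² s⁻¹, ξ ≈ 1/300 K⁻¹), with l = 0.1 m and ΔT = 20 K:
\[ \mathrm{Gr} = \frac{(9.81)\,(1/300)\,(20)\,(0.1)^{3}}{(1.5\times10^{-5})^{2}} \approx 2.9\times10^{6} \]
Roughly speaking, a value of this order indicates a vigorous but still laminar buoyancy-driven flow over a surface of this size.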
The significance of the Grashof number is that it represents the ratio of the buoyancy force due to spatial variation in fluid density (caused by temperature differences) to the restraining force due to the viscosity of the fluid.
The form of the Grashof number can be derived by considering the forces on a small element of fluid of volume l³.
The buoyancy force, Fb, on this element has the magnitude g l³Δρ, where Δρ is the difference in density between the element and the surrounding fluid. The order of magnitude of the viscous force, Fv, on the element is η u l, where η is the fluid viscosity, and u the velocity of the element relative to the surrounding fluid. Hence,
\[ \frac{F_b}{F_v} \sim \frac{g\,l^{3}\,\Delta\rho}{\eta\,u\,l} = \frac{g\,l^{2}\,\Delta\rho}{\eta\,u} \tag{2} \]
The order of magnitude of the velocity u may be obtained by equating viscous and momentum forces, i.e.,
\[ \rho\,u^{2}\,l^{2} \sim \eta\,u\,l \tag{3} \]
or
\[ u \sim \frac{\eta}{\rho\,l} \tag{4} \]
Substituting this value into the ratio of buoyancy to viscous forces gives
\[ \frac{F_b}{F_v} \sim \frac{g\,l^{3}\,\rho\,\Delta\rho}{\eta^{2}} \tag{5} \]
and using the relationship
\[ \Delta\rho = \rho\,\xi\,\Delta T \tag{6} \]
gives
\[ \frac{F_b}{F_v} \sim \frac{g\,l^{3}\,\rho^{2}\,\xi\,\Delta T}{\eta^{2}} = \frac{g\,\xi\,\Delta T\,l^{3}}{\nu^{2}} = \mathrm{Gr} \tag{7} \]
Since the Reynolds number, Re, represents the ratio of momentum to viscous forces, the relative magnitudes of Gr and Re are an indication of the relative importance of natural and forced convection in determining heat transfer. Forced convection effects are usually insignificant when Gr/Re² >> 1 and, conversely, natural convection effects may be neglected when Gr/Re² << 1. When the ratio is of the order of one, the combined effects of natural and forced convection have to be taken into account.
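Since the programming posts later on this blog use small C programs, here is a minimal sketch of this criterion in the same style. It is not from the original article: all numerical values are illustrative assumptions, and the cutoffs of 10 and 0.1 merely stand in for the ">>" and "<<" conditions above.

#include <stdio.h>

int main(void)
{
    /* Illustrative values only (assumed, not from the article): air near room temperature. */
    double g  = 9.81;        /* acceleration due to gravity, m/s^2                 */
    double xi = 1.0/300.0;   /* coefficient of expansion, 1/K (ideal-gas estimate) */
    double dT = 20.0;        /* surface-to-bulk temperature difference, K          */
    double l  = 0.1;         /* representative dimension, m                        */
    double nu = 1.5e-5;      /* kinematic viscosity, m^2/s                         */
    double u  = 0.5;         /* assumed free-stream velocity for Re, m/s           */

    double Gr    = g * xi * dT * l * l * l / (nu * nu);   /* equation (1)    */
    double Re    = u * l / nu;                             /* Reynolds number */
    double ratio = Gr / (Re * Re);

    printf("Gr = %.3e, Re = %.3e, Gr/Re^2 = %.3e\n", Gr, Re, ratio);

    /* Arbitrary cutoffs standing in for the ">>" and "<<" conditions in the text. */
    if (ratio > 10.0)
        printf("Natural convection dominates; forced convection is negligible.\n");
    else if (ratio < 0.1)
        printf("Forced convection dominates; natural convection is negligible.\n");
    else
        printf("Combined (mixed) convection: both effects matter.\n");

    return 0;
}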

History of Thermodynamics



Basic physical notions of heat and temperature were established in the 1600s, and scientists of the time appear to have thought correctly that heat is associated with the motion of microscopic constituents of matter. But in the 1700s it became widely believed that heat was instead a separate fluid-like substance. Experiments by James Joule and others in the 1840s put this in doubt, and finally in the 1850s it became accepted that heat is in fact a form of energy. The relation between heat and energy was important for the development of steam engines, and in 1824 Sadi Carnot had captured some of the ideas of thermodynamics in his discussion of the efficiency of an idealized engine. Around 1850 Rudolf Clausius and William Thomson (Kelvin) stated both the First Law - that total energy is conserved - and the Second Law of Thermodynamics. The Second Law was originally formulated in terms of the fact that heat does not spontaneously flow from a colder body to a hotter. Other formulations followed quickly, and Kelvin in particular understood some of the law’s general implications.

The idea that gases consist of molecules in motion had been discussed in some detail by Daniel Bernoulli in 1738, but had fallen out of favor, and was revived by Clausius in 1857. Following this, James Clerk Maxwell in 1860 derived from the mechanics of individual molecular collisions the expected distribution of molecular speeds in a gas. Over the next several years the kinetic theory of gases developed rapidly, and many macroscopic properties of gases in equilibrium were computed. In 1872 Ludwig Boltzmann constructed an equation that he thought could describe the detailed time development of a gas, whether in equilibrium or not. In the 1860s Clausius had introduced entropy as a ratio of heat to temperature, and had stated the Second Law in terms of the increase of this quantity. Boltzmann then showed that his equation implied the so-called H Theorem, which states that a quantity equal to entropy in equilibrium must always increase with time. At first, it seemed that Boltzmann had successfully proved the Second Law. But then it was noticed that since molecular collisions were assumed reversible, his derivation could be run in reverse, and would then imply the opposite of the Second Law. Much later it was realized that Boltzmann’s original equation implicitly assumed that molecules are uncorrelated before each collision, but not afterwards, thereby introducing a fundamental asymmetry in time.

Early in the 1870s Maxwell and Kelvin appear to have already understood that the Second Law could not formally be derived from microscopic physics, but must somehow be a consequence of human inability to track large numbers of molecules. In responding to objections concerning reversibility, Boltzmann realized around 1876 that in a gas there are many more states that seem random than seem orderly. This realization led him to argue that entropy must be proportional to the logarithm of the number of possible states of a system, and to formulate ideas about ergodicity.

The statistical mechanics of systems of particles was put in a more general context by Willard Gibbs, beginning around 1900. Gibbs introduced the notion of an ensemble - a collection of many possible states of a system, each assigned a certain probability. He argued that if the time evolution of a single state were to visit all other states in the ensemble - the so-called ergodic hypothesis - then averaged over a sufficiently long time a single state would behave in a way that was typical of the ensemble. Gibbs also gave qualitative arguments that entropy would increase if it were measured in a "coarse-grained" way in which nearby states were not distinguished.

In the early 1900s the development of thermodynamics was largely overshadowed by quantum theory and little fundamental work was done on it. Nevertheless, by the 1930s, the Second Law had somehow come to be generally regarded as a principle of physics whose foundations should be questioned only as a curiosity. Despite neglect in physics, however, ergodic theory became an active area of pure mathematics, and from the 1920s to the 1960s properties related to ergodicity were established for many kinds of simple systems. When electronic computers became available in the 1950s, Enrico Fermi and others began to investigate the ergodic properties of nonlinear systems of springs. But they ended up concentrating on recurrence phenomena related to solitons, and not looking at general questions related to the Second Law. Much the same happened in the 1960s, when the first simulations of hard sphere gases ended up concentrating on the specific phenomenon of long-time tails. And by the 1970s, computer experiments were mostly oriented towards ordinary differential equations and strange attractors, rather than towards systems with large numbers of components, to which the Second Law might apply.

Starting in the 1950s, it was recognized that entropy is simply the negative of the information quantity introduced in the 1940s by Claude Shannon. Following statements by John von Neumann, it was thought that any computational process must necessarily increase entropy, but by the early 1970s, notably with work by Charles Bennett, it became accepted that this is not so, laying some early groundwork for relating computational and thermodynamic ideas.

Notes on Thermodynamics





Thermodynamics (from the Greek thermos meaning heat and dynamis meaning power)
is a branch of physics that studies the effects of changes in temperature, pressure, and
volume on physical systems at the macroscopic scale by analyzing the collective motion
of their particles using statistics. Roughly, heat means "energy in transit" and dynamics
relates to "movement"; thus, in essence thermodynamics studies the movement of energy
and how energy instills movement. Historically, thermodynamics developed out of the
need to increase the efficiency of early steam engines.
The starting point for most thermodynamic considerations is the laws of
thermodynamics, which postulate that energy can be exchanged between physical
systems as heat or work. They also postulate the existence of a quantity named entropy,
which can be defined for any system. Central to this are the concepts of
system and surroundings. A system is composed of particles, whose average motions
define its properties, which in turn are related to one another through equations of state.
Properties can be combined to express internal energy and thermodynamic potentials, which are
useful for determining conditions for equilibrium and spontaneous processes.
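As a compact illustration of the quantities just mentioned (these are standard textbook forms, not part of the original notes): the First Law balances heat and work against internal energy, the Clausius inequality introduces entropy, and a thermodynamic potential such as the Gibbs energy gives a spontaneity condition at constant temperature and pressure:
\[ dU = \delta Q - \delta W, \qquad dS \ge \frac{\delta Q}{T}, \qquad (dG)_{T,P} \le 0 \quad \text{where } G = H - TS . \]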

Quotes
 "Thermodynamics is the only physical theory of universal content which,
within the framework of the applicability of its basic concepts, I am
convinced will never be overthrown." — Albert Einstein
 "The law that entropy always increases - the Second Law of
Thermodynamics - holds, I think, the supreme position among the laws of
physics. If someone points out to you that your pet theory of the universe is
in disagreement with Maxwell's equations - then so much the worse for
Maxwell's equations. If it is found to be contradicted by observation - well, these
experimentalists do bungle things sometimes. But if your theory is found to be against
the Second Law of Thermodynamics I can give you no hope; there is nothing for it but
to collapse in deepest humiliation." — Arthur Eddington

Thursday, 16 August 2012

Cell & its invention


The conventional construction of a flashlight-type cell involves a zinc anode can with a depolarizer mix filling most of the can and having a carbon rod in the center as a current collector for the depolarizer mix. The cell is sealed by a soft asphaltic pitch and a metal cap which fits over the carbon rod and serves as the positive terminal. An airspace is provided above the depolarizer mix and below the pitch to permit the collection of gases and cell exudate. The gases are formed during discharge of the cell and means have to be provided for venting the gases before too large a pressure develops within the cell. If the cell is rechargeable, more gases may be evolved during charging and it becomes more important that the gases be vented properly. 

There are disadvantages to the conventional cell construction. In particular, several assembly stations are required for placing a seal washer down into the cell, pouring the asphaltic pitch onto the seal washer, placing a vent washer on top of the pitch and finally placing the terminal cap on top of the cell and locking it in place. The vent washer on top of the pitch prevents the cap from becoming embedded in the pitch and forming a gastight seal, which could prevent venting. 

The use of the asphaltic pitch makes the sealing operation dirty and somewhat expensive due to the several steps required in the operation. In addition, the soft pitch does at times squeeze up around the top washer and cause a gastight seal around the terminal cap edge. Another disadvantage is that high temperatures may soften the pitch and cause it to leak from the cell container.

Linear search in Dev Cpp





#include <stdio.h>
#include <conio.h>

#define SIZE 100

/* Return the index of key in array, or -1 if it is not present. */
int linearSearch( const int array[], int key, int size )
{
    int n;

    for ( n = 0; n < size; n++ )
    {
        if ( array[n] == key )
        {
            return n;
        }
    }

    return -1;
}

int main()
{
    int a[SIZE];
    int x;
    int searchkey;
    int element;

    /* Fill the array with the even numbers 0, 2, 4, ..., 198. */
    for ( x = 0; x < SIZE; x++ )
    {
        a[x] = 2 * x;
    }

    printf( "Enter integer search key:\n" );
    scanf( "%d", &searchkey );

    element = linearSearch( a, searchkey, SIZE );

    if ( element != -1 )
    {
        printf( "Found value at element %d\n", element );
    }
    else
    {
        printf( "Value not found\n" );
    }

    getch();
    return 0;
}

Matrix in Dev Cpp




#include <stdio.h>
#include <conio.h>

int main()
{
    int x[2][2];   /* first input matrix           */
    int y[2][2];   /* second input matrix          */
    int z[2][2];   /* result of the addition x + y */

    printf( "Enter 4 elements of X[2][2]\n" );
    scanf( "%d%d%d%d", &x[0][0], &x[0][1], &x[1][0], &x[1][1] );

    printf( "Enter 4 elements of Y[2][2]\n" );
    scanf( "%d%d%d%d", &y[0][0], &y[0][1], &y[1][0], &y[1][1] );

    /* Add the matrices element by element. */
    z[0][0] = x[0][0] + y[0][0];
    z[0][1] = x[0][1] + y[0][1];
    z[1][0] = x[1][0] + y[1][0];
    z[1][1] = x[1][1] + y[1][1];

    /* Print the sum in the form |x| + |y| = |z|, row by row. */
    printf( "|%d %d |   |%d %d |   |%d %d |", x[0][0], x[0][1], y[0][0], y[0][1], z[0][0], z[0][1] );
    printf( "\n|     | + |     | = |     |" );
    printf( "\n|%d %d |   |%d %d |   |%d %d |", x[1][0], x[1][1], y[1][0], y[1][1], z[1][0], z[1][1] );

    getch();
    return 0;
}

Description of Dry Cell




Dry Cell Batteries
A dry cell is the most common type of battery used today, according to Robert Asato, Ph.D., of Leeward Community College, in Pearl City, Hawaii. Dry cell batteries power small portable electronic devices, such as flashlights, audio players, watches, cameras and TV remotes. Disposable or rechargeable, dry cells range from pencil-tip-sized batteries used in medical applications to enormous batteries designed to provide backup for cities in case of power outages.

History
According to Energizer, archaeological digs suggest batteries were invented in some form at least 2,000 years ago, but the forerunners of the batteries we know today began in 1798 with Count Alessandro Volta's "voltaic pile," a crude battery made with copper and a salt or acid solution. French chemist Georges Leclanche's 1868 design of the "wet" cell battery was the forerunner of the first "dry" cell, invented in 1888 by German scientist Carl Gassner, which was similar to modern carbon-zinc batteries.

Types
Asato points to the carbon-zinc battery, with ammonium chloride providing the dry "paste" for the chemical reaction, as a common type of dry cell. Alkaline batteries last longer, however, because the sodium hydroxide or potassium hydroxide electrolyte used is less corrosive to the zinc. Other types include silver batteries, mercury cell batteries often used for calculators, and nickel-cadmium and nickel metal hydride batteries, which can be recharged.

Function
A standard dry cell battery works through an electrochemical reaction: when the cell is inserted into a device and the circuit is completed, the reaction between the cathode (often carbon) and the anode (often zinc), separated by an electrolyte, generates an electric current that is collected within the cell and conducted to the exterior circuit.

Disposable or Rechargeable
Commonly, low-voltage and rarely used devices such as flashlights use disposable carbon-zinc dry cell batteries as well as the longer-lasting alkaline dry cell batteries. The Green Living Tips website states that nickel-cadmium (NiCad) or nickel metal hydride (NiMH) batteries are the most frequently purchased rechargeables, sold in most if not all of the same sizes as disposables. Rechargeable batteries generally save you money if the device being powered is used so often that fresh batteries are constantly required.