Monday, 22 June 2009

Cesium vapor magnetometer | Applications | Spin-exchange relaxation-free (SERF) atomic magnetometers | SQUID magnetometer


Cesium vapor magnetometer
A basic example of the workings of a magnetometer may be given by discussing the common "optically pumped cesium vapor magnetometer" which is a highly sensitive (0.004 nT/√Hz) and accurate device used in a wide range of applications. Although it relies on some interesting quantum mechanics to operate, its basic principles are easily explained.

The device broadly consists of a photon emitter containing a cesium light emitter or lamp, an absorption chamber containing cesium vapor and a "buffer gas" through which the emitted photons pass, and a photon detector, arranged in that order.

Polarization
The basic principle that allows the device to operate is the fact that a cesium atom can exist in any of nine energy levels, which can informally be thought of as placements of its electron atomic orbitals around the atomic nucleus. When a cesium atom within the chamber encounters a photon from the lamp, it jumps to a higher energy state and then re-emits a photon, falling to an indeterminate lower energy state. The cesium atom is 'sensitive' to the photons from the lamp in three of its nine energy states, so eventually, assuming a closed system, all the atoms will fall into a state in which all the photons from the lamp pass through unhindered and are measured by the photon detector. At this point the sample (or population) is said to be polarized and ready for measurement to take place. This process occurs continuously during operation.

Detection
Given that this theoretically perfect magnetometer is now functional, it can now begin to make measurements.

In the most common type of cesium magnetometer, a very small AC magnetic field is applied to the cell. Since the difference in the energy levels of the electrons is determined by the external magnetic field, there is a frequency at which this small AC field will cause the electrons to change states. In this new state, the electron will once again be able to absorb a photon of light. This causes a signal on a photo detector that measures the light passing through the cell. The associated electronics uses this fact to create a signal exactly at the frequency which corresponds to the external field.

Another type of cesium magnetometer modulates the light applied to the cell. This is referred to as a Bell–Bloom magnetometer, after the two scientists who first investigated the effect. If the light is turned on and off at the frequency corresponding to the Earth's field, there is a change in the signal seen at the photo detector. Again, the associated electronics uses this to create a signal exactly at the frequency which corresponds to the external field.

Both methods lead to high performance magnetometers.
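
In both schemes the quantity actually measured is a frequency proportional to the ambient field, so the final step is a simple conversion. A minimal sketch, assuming the commonly quoted gyromagnetic ratio for cesium-133 of roughly 3.498 Hz per nanotesla (a value not stated in the text above):

    # Convert a cesium magnetometer's resonance frequency to field strength.
    # GAMMA_CS is the commonly quoted Cs-133 gyromagnetic ratio (an assumed
    # constant here, not a figure from the text).
    GAMMA_CS = 3.498  # Hz per nT

    def field_from_frequency(f_hz):
        """Ambient field in nanotesla for a measured resonance frequency in Hz."""
        return f_hz / GAMMA_CS

    # In a typical Earth field of ~50,000 nT the resonance sits near 175 kHz:
    print(round(field_from_frequency(174_900)))  # 50000 nT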

Applications
The cesium magnetometer is typically used where a higher performance magnetometer than the proton magnetometer is needed. In archaeology and geophysics, where the sensor is moved through an area and many accurate magnetic field measurements are needed, the cesium magnetometer has advantages over the proton magnetometer.

The cesium magnetometer's faster measurement rate allows the sensor to be moved through the area more quickly for a given number of data points.

The lower noise of the cesium magnetometer allows those measurements to more accurately show the variations in the field with position.

Spin-exchange relaxation-free (SERF) atomic magnetometers

At sufficiently high atomic density, extremely high sensitivity can be achieved. Spin-exchange-relaxation-free (SERF) atomic magnetometers containing potassium, cesium or rubidium vapor operate similarly to the cesium magnetometers described above yet can reach sensitivities lower than 1 fT/√Hz.

The SERF magnetometers only operate in small magnetic fields. The Earth's field is about 50 µT. SERF magnetometers operate in fields less than 0.5 µT.

Large-volume detectors have achieved a sensitivity of 200 aT/√Hz. This technology has greater sensitivity per unit volume than SQUID detectors.

The technology can also produce very small magnetometers that may in the future replace coils for detecting changing magnetic fields.

Rapid developments are ongoing in this area. This technology may produce a magnetic sensor that has all of its input and output signals in the form of light on fiberoptic cables. This would allow the magnetic measurement to be made in places where high electrical voltages exist.

SQUID magnetometer

SQUIDs, or superconducting quantum interference devices, measure extremely small magnetic fields; they are very sensitive vector magnetometers, with noise levels as low as 3 fT/√Hz in commercial instruments and 0.4 fT/√Hz in experimental devices. Until the advent of SERF atomic magnetometers in 2002, this level of sensitivity was unreachable otherwise.
These magnetometers require cooling with liquid helium (4.2 K) or liquid nitrogen (77 K) to operate, hence the packaging requirements to use them are rather stringent both from a thermal-mechanical as well as magnetic standpoint. SQUID magnetometers allow one to measure the magnetic fields produced by brain or heart activity (magnetoencephalography and magnetocardiography, respectively).

Early magnetometers
In 1833 Carl Friedrich Gauss, head of the Geomagnetic Observatory in Göttingen, published a paper on measurement of the Earth's magnetic field. [10] It described a new instrument that Gauss called a "magnometer" (a term which is still occasionally used instead of magnetometer). It consisted of a permanent bar magnet suspended horizontally from a gold fibre. A magnetometer is also called a gaussmeter.

Source: Wikipedia

Magnetometer | Uses | Types | Fluxgate magnetometer


Magnetometer
A magnetometer is a scientific instrument used to measure the strength and/or direction of the magnetic field in the vicinity of the instrument. Magnetism varies from place to place, and differences in Earth's magnetic field can be caused by the differing nature of rocks and by the interaction between charged particles from the Sun and the magnetosphere of a planet. Magnetometers are a frequent component instrument on spacecraft that explore planets.

Uses
Magnetometers are used in geophysical surveys to find deposits of iron because they can measure the magnetic field variations caused by the deposits, using airplanes like the Shrike Commander. Magnetometers are also used to detect archaeological sites, shipwrecks and other buried or submerged objects. Magnetic anomaly detectors detect submarines for military purposes.

They are used in directional drilling for oil or gas to detect the azimuth of the drilling tools near the drill bit. They are most often paired up with accelerometers in drilling tools so that both the inclination and azimuth of the drill bit can be found.

Magnetometers are very sensitive, and can give an indication of possible auroral activity before one can see the light from the aurora. A grid of magnetometers around the world constantly measures the effect of the solar wind on the Earth's magnetic field, which is published as the K-index.
In space exploration
A three-axis fluxgate magnetometer was part of the Mariner 2 and Mariner 10 missions. A dual-technique magnetometer is part of the Cassini–Huygens mission to explore Saturn. This system is composed of a vector helium magnetometer and a fluxgate magnetometer. Magnetometers are also a component instrument on the MESSENGER mission to Mercury. A magnetometer can also be used by satellites like GOES to measure both the magnitude and direction of a planet's or moon's magnetic field.

Types
Magnetometers can be divided into two basic types:

Scalar magnetometers measure the total strength of the magnetic field to which they are subjected, and

Vector magnetometers have the capability to measure the component of the magnetic field in a particular direction.

The use of three orthogonal vector magnetometers allows the magnetic field strength, inclination and declination to be uniquely defined. Examples of vector magnetometers are fluxgates, superconducting quantum interference devices (SQUIDs), and the atomic SERF magnetometer. Some scalar magnetometers are discussed below.

A magnetograph is a special magnetometer that continuously records data.

Rotating coil magnetometer
The magnetic field induces a sine wave in a rotating coil. The amplitude of the signal is proportional to the strength of the field, provided it is uniform, and to the sine of the angle between the rotation axis of the coil and the field lines. This type of magnetometer is obsolete.
Hall effect magnetometer

The most common magnetic sensing devices are solid-state Hall effect sensors. These sensors produce a voltage proportional to the applied magnetic field and also sense polarity.
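
This behaviour follows from the Hall relation V_H = I·B/(n·q·t). A minimal sketch; the carrier density and geometry below are assumed illustrative values, not figures for any particular sensor:

    # Hall relation V_H = I*B / (n*q*t): the output voltage is proportional
    # to the applied field B, and its sign follows the field polarity.
    Q_E = 1.602e-19  # elementary charge, C

    def hall_voltage(current_a, field_t, carrier_density_m3, thickness_m):
        """Hall voltage in volts; the sign tracks the polarity of the field."""
        return current_a * field_t / (carrier_density_m3 * Q_E * thickness_m)

    # 1 mA through a 10-micrometre-thick element with an assumed carrier
    # density of 1e22 per cubic metre, in a 0.1 T field:
    print(hall_voltage(1e-3, 0.1, 1e22, 10e-6))  # ~6.2e-3 V
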
Proton precession magnetometer

One type of magnetometer is the proton precession magnetometer, also known as the proton magnetometer, which measures the resonance frequency of protons (hydrogen nuclei) in the magnetic field to be measured, using nuclear magnetic resonance (NMR).

A direct current flowing in an inductor creates a strong magnetic field around a hydrogen-rich fluid, causing the protons to align themselves with that field. The current is then interrupted, and as protons are realigned with Earth's magnetic field they precess at a specific frequency. This produces a weak alternating magnetic field that is picked up by a (sometimes separate) inductor. The relationship between the frequency of the induced current and the strength of Earth's magnetic field is called the proton gyromagnetic ratio, and is equal to 0.042576 hertz per nanotesla (Hz/nT).
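
Since the gyromagnetic ratio is given above as 0.042576 Hz/nT, converting between precession frequency and field strength is a one-line calculation. A minimal sketch (the sample numbers are illustrative):

    # Proton precession: frequency and field are linked by the gyromagnetic
    # ratio quoted in the text (0.042576 Hz/nT), so measuring the field
    # reduces to a precise frequency measurement.
    GAMMA_P = 0.042576  # Hz per nT

    def field_from_precession(f_hz):
        return f_hz / GAMMA_P

    def precession_from_field(b_nt):
        return b_nt * GAMMA_P

    # Earth's field of ~50,000 nT gives ~2.13 kHz, consistent with the
    # 1.5-2.5 kHz EFNMR range mentioned below:
    print(round(precession_from_field(50_000)))  # 2129 Hz
    print(round(field_from_precession(2000)))    # 46976 nT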

Because the precession frequency depends only on atomic constants and the strength of the external magnetic field, the accuracy of this type of magnetometer is very good. Magnetic impurities in the sensor and errors in the measurement of the frequency are the two causes of errors in these magnetometers.

If several tens of watts are available to power the aligning process, these magnetometers can be moderately sensitive. Measuring once per second, standard deviations in the readings in the 0.01 nT to 0.1 nT range can be obtained.

The strength of the Earth's magnetic field varies with time and location, so the frequency of Earth's field NMR (EFNMR) for protons varies from approximately 1.5 kHz near the equator to 2.5 kHz near the geomagnetic poles.

The measurement of the precession frequency of proton spins in a magnetic field can give the value of the field with high accuracy and is widely used for that purpose. In low fields, such as the Earth's magnetic field, the NMR signal is weak because the nuclear magnetization is small, and specialised electronic amplifiers must be used to enhance the signal. Incorporated in existing portable magnetometers, these devices make them capable of measuring fields to an absolute accuracy of about one part in 10⁶ and detecting field variations of about 0.1 nT. Typical variation of Earth's field strength at a particular location during its daily rotation is about 25 nT (i.e. about 1 part in 2,000), with variations over a few seconds of typically around 1 nT (i.e. about 1 part in 50,000).

Apart from the direct measurement of the magnetic field on Earth or in space, these magnetometers prove useful for detecting variations of the magnetic field in space or in time caused by submarines, skiers buried under snow, archaeological remains, and mineral deposits.
Fluxgate magnetometer

A fluxgate magnetometer consists of a small, magnetically susceptible, core wrapped by two coils of wire. An alternating electrical current is passed through one coil, driving the core through an alternating cycle of magnetic saturation, i.e., magnetised - unmagnetised - inversely magnetised - unmagnetised - magnetised. This constantly changing field induces an electrical current in the second coil, and this output current is measured by a detector. In a magnetically neutral background, the input and output currents will match. However, when the core is exposed to a background field, it will be more easily saturated in alignment with that field and less easily saturated in opposition to it. Hence the alternating magnetic field, and the induced output current, will be out of step with the input current. The extent to which this is the case will depend on the strength of the background magnetic field. Often, the current in the output coil is integrated, yielding an output analog voltage, proportional to the magnetic field.
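
A toy numerical model can make this mechanism concrete: below, the saturating core is modelled with a tanh curve, and the asymmetry introduced by a background field shows up as even harmonics of the drive frequency in the induced voltage, which is the signature a real fluxgate detects. All parameter values are arbitrary illustrative choices, not figures for any real instrument:

    # Toy fluxgate model: a saturating core (tanh) driven by a sine current.
    # With zero background field the sense voltage contains only odd
    # harmonics; a background field breaks the symmetry and a second
    # harmonic appears whose size grows with the field.
    import numpy as np

    f_drive = 1000.0                    # drive frequency, Hz (arbitrary)
    t = np.linspace(0.0, 0.01, 10_000)  # 10 ms of signal
    drive = 5.0 * np.sin(2 * np.pi * f_drive * t)

    def second_harmonic(background_field):
        core_b = np.tanh(drive + background_field)  # saturating core flux
        sense_v = np.gradient(core_b, t)            # induced voltage ~ dB/dt
        spectrum = np.abs(np.fft.rfft(sense_v))
        freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
        return spectrum[np.argmin(np.abs(freqs - 2 * f_drive))]

    for b in (0.0, 0.1, 0.2):  # background field, arbitrary units
        print(b, round(float(second_harmonic(b)), 1))
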
Fluxgate magnetometers, paired in a gradiometer configuration, are commonly used for archaeological prospection. In Britain the most common such instruments to be used are the Geoscan FM series of instruments and the Bartington GRAD601. Both are capable of resolving magnetic variations as weak as 0.1 nT (roughly equivalent to one half-millionth of the Earth's magnetic field strength).

A wide variety of sensors are currently available and used to measure magnetic fields. Fluxgate magnetometers and gradiometers measure the direction and magnitude of magnetic fields. Fluxgates are affordable, rugged, compact and very low-power, making them ideal for a variety of sensing applications. Fluxgate magnetometer sensors are manufactured in several geometries and have recently made significant improvements in noise performance, crossfield tolerance and power utilization.

The typical fluxgate magnetometer consists of a "sense" (secondary) coil surrounding an inner "drive" (primary) coil that is wound around permeable core material. Billingsley Aerospace & Defense, Inc. currently manufactures four types of sensors: ring core, rod / Förster, racetrack and the recently developed Single Domain. Each sensor has magnetic core elements that can be viewed as two carefully matched halves. An alternating current is applied to the drive winding, which drives the core into plus and minus saturation. The instantaneous drive current in each core half flows in opposite polarity with respect to any external magnetic field. In the absence of any external magnetic field, the flux in one core half cancels that in the other and the total flux seen by the sense coil is zero. If an external magnetic field is now applied, it will, at a given instant in time, aid the flux in one core half and oppose the flux in the other. This causes a net flux imbalance between the halves, so that they no longer cancel one another. Current pulses are now induced in the sense winding on every drive current phase reversal (that is, at the second and all even harmonics). This results in a signal that is dependent on both the external field magnitude and polarity.

There are additional factors that affect the size of the resultant signal. These factors include the number of turns in the sense winding, magnetic permeability of the core, sensor geometry and the gated flux rate of change with respect to time. Phase synchronous detection is used to convert these harmonic signals to a DC voltage proportional to the external magnetic field.
Fluxgate magnetometers were invented in the 1930s by Victor Vacquier at Gulf Research Laboratories; Vacquier applied them during World War II as an instrument for detecting submarines, and after the war confirmed the theory of plate tectonics by using them to measure shifts in the magnetic patterns on the sea floor.

Source: Wikipedia

Electron synchrotrons | Storage rings | History | Targets and detectors

Electron synchrotrons
Circular electron accelerators fell somewhat out of favor for particle physics around the time that SLAC was constructed, because their synchrotron losses were considered economically prohibitive and because their beam intensity was lower than for the unpulsed linear machines. The Cornell Electron Synchrotron, built at low cost in the late 1960s, was the first in a series of high-energy circular electron accelerators built for fundamental particle physics, culminating in the LEP at CERN.

A large number of electron synchrotrons have been built in the past two decades, specialized to be synchrotron light sources of ultraviolet light and X-rays; see below.

Storage rings
For some applications, it is useful to store beams of high energy particles for some time (with modern high vacuum technology, up to many hours) without further acceleration. This is especially true for colliding beam accelerators, in which two beams moving in opposite directions are made to collide with each other, with a large gain in effective collision energy. Because relatively few collisions occur at each pass through the intersection point of the two beams, it is customary to first accelerate the beams to the desired energy, and then store them in storage rings, which are essentially synchrotron rings of magnets, with no significant RF power for acceleration.

Synchrotron radiation sources
Some circular accelerators have been built to deliberately generate radiation (called synchrotron light or synchrotron radiation) as X-rays, for example the Diamond Light Source being built at the Rutherford Appleton Laboratory in England or the Advanced Photon Source at Argonne National Laboratory in Illinois, USA. High-energy X-rays are useful for X-ray spectroscopy of proteins or X-ray absorption fine structure (XAFS), for example.

Synchrotron radiation is more powerfully emitted by lighter particles, so these accelerators are invariably electron accelerators. Synchrotron radiation allows for better imaging as researched and developed at SLAC's SPEAR.

History
Lawrence's first cyclotron was a mere 4 inches (100 mm) in diameter. Later he built a machine with a 60-inch-diameter pole face, and planned one with a 184-inch diameter, which was, however, taken over for World War II-related work connected with uranium isotope separation; after the war it continued in service for research and medicine over many years.

The first large proton synchrotron was the Cosmotron at Brookhaven National Laboratory, which accelerated protons to about 3 GeV. The Bevatron at Berkeley, completed in 1954, was specifically designed to accelerate protons to sufficient energy to create anti-protons, and verify the particle-antiparticle symmetry of nature, then only strongly suspected. The Alternating Gradient Synchrotron (AGS) at Brookhaven was the first large synchrotron with alternating gradient, "strong focusing" magnets, which greatly reduced the required aperture of the beam, and correspondingly the size and cost of the bending magnets. The Proton Synchrotron, built at CERN, was the first major European particle accelerator and generally similar to the AGS.

The Fermilab Tevatron has a ring with a beam path of 4 miles (6 km). The largest circular accelerator ever built was the LEP synchrotron at CERN with a circumference of 26.6 kilometers, which was an electron/positron collider. It has been dismantled and the underground tunnel is being reused for a proton collider called the LHC, due to start operation at the end of July 2008. However, after operating for a short time, 100 of the giant superconducting magnets failed and the LHC had to shut down in September 2008.

According to a press release from CERN that was printed in Scientific American "the most likely cause of the problem was a faulty electrical connection between two magnets, which probably melted at high current leading to mechanical failure."

A later report by the BBC said "On Friday, a failure, known as a quench, caused around 100 of the LHC's super-cooled magnets to heat up by as much as 100 degrees." This was later described as a "massive magnet quench".

This caused a rupture of high magnitude, leaking one ton of liquid helium into the LHC tunnels. The liquid helium permits more efficient use of power: supercooling allows the electrical resistance of the superconducting magnets to be nonexistent – zero ohms. According to the BBC, before the accident the operating temperature was 1.9 kelvin (-271 °C; -456 °F), which is colder than deep space.

The aborted Superconducting Super Collider (SSC) in Texas would have had a circumference of 87 km. Construction was started in 1991, but abandoned in 1993. Very large circular accelerators are invariably built in underground tunnels a few metres wide to minimize the disruption and cost of building such a structure on the surface, and to provide shielding against intense secondary radiations that may occur. These are extremely penetrating at high energies.

Current accelerators such as the Spallation Neutron Source incorporate superconducting cryomodules. The Relativistic Heavy Ion Collider and the Large Hadron Collider also make use of superconducting magnets and RF cavity resonators to accelerate particles.

Targets and detectors
The output of a particle accelerator can generally be directed towards multiple lines of experiments, one at a given time, by means of a deviating electromagnet. This makes it possible to operate multiple experiments without needing to move things around or shutting down the entire accelerator beam. Except for synchrotron radiation sources, the purpose of an accelerator is to generate high-energy particles for interaction with matter.
This is usually a fixed target, such as the phosphor coating on the back of the screen in the case of a television tube; a piece of uranium in an accelerator designed as a neutron source; or a tungsten target for an X-ray generator. In a linac, the target is simply fitted to the end of the accelerator. The particle track in a cyclotron is a spiral outwards from the centre of the circular machine, so the accelerated particles emerge from a fixed point as for a linear accelerator.

For synchrotrons, the situation is more complex. Particles are accelerated to the desired energy. Then, a fast acting dipole magnet is used to switch the particles out of the circular synchrotron tube and towards the target.

A variation commonly used for particle physics research is a collider, also called a storage ring collider. Two circular synchrotrons are built in close proximity – usually on top of each other and using the same magnets (which are then of more complicated design to accommodate both beam tubes). Bunches of particles travel in opposite directions around the two accelerators and collide at intersections between them. This can increase the energy enormously; whereas in a fixed-target experiment the energy available to produce new particles is proportional to the square root of the beam energy, in a collider the available energy is linear.
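
A quick calculation with the standard relativistic invariant-mass formulas shows how large this gain is; the beam energies below are arbitrary illustrative values:

    # Centre-of-mass energy: fixed target vs head-on collider (GeV units).
    from math import sqrt

    M_P = 0.938  # proton rest energy, GeV

    def ecm_fixed_target(e_beam):
        # invariant mass for a beam proton striking a proton at rest
        return sqrt(2 * e_beam * M_P + 2 * M_P ** 2)

    def ecm_collider(e_beam):
        # two equal-energy beams colliding head-on
        return 2 * e_beam

    for e in (10, 100, 1000):
        print(e, round(ecm_fixed_target(e), 1), ecm_collider(e))
    # At 1000 GeV per beam: ~43 GeV fixed-target vs 2000 GeV in a collider.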

Higher energies
At present the highest energy accelerators are all circular colliders, but it is likely that limits have been reached in respect of compensating for synchrotron radiation losses for electron accelerators, and the next generation will probably be linear accelerators 10 times the current length. An example of such a next-generation electron accelerator is the 40 km long International Linear Collider, due to be constructed between 2015 and 2020.

As of 2005, it is believed that plasma wakefield acceleration in the form of electron-beam 'afterburners' and standalone laser pulsers will provide dramatic increases in efficiency within two to three decades. In plasma wakefield accelerators, the beam cavity is filled with a plasma (rather than vacuum). A short pulse of electrons or laser light either constitutes or immediately trails the particles that are being accelerated. The pulse disrupts the plasma, causing the charged particles in the plasma to integrate into and move toward the rear of the bunch of particles that are being accelerated. This process transfers energy to the particle bunch, accelerating it further, and continues as long as the pulse is coherent.

Energy gradients as steep as 200 GeV/m have been achieved over millimeter-scale distances using laser pulsers[13] and gradients approaching 1 GeV/m are being produced on the multi-centimeter-scale with electron-beam systems, in contrast to a limit of about 0.1 GeV/m for radio-frequency acceleration alone. Existing electron accelerators such as SLAC could use electron-beam afterburners to greatly increase the energy of their particle beams, at the cost of beam intensity. Electron systems in general can provide tightly collimated, reliable beams; laser systems may offer more power and compactness. Thus, plasma wakefield accelerators could be used — if technical issues can be resolved — to both increase the maximum energy of the largest accelerators and to bring high energies into university laboratories and medical centres.
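
Some back-of-the-envelope arithmetic with the gradients quoted above shows why this matters; the 1 TeV target energy is an arbitrary illustrative choice:

    # Length needed to reach 1 TeV at the gradients quoted above.
    TARGET_GEV = 1000.0  # 1 TeV, an illustrative target

    gradients_gev_per_m = {
        "RF cavities (~0.1 GeV/m)": 0.1,
        "electron-beam wakefield (~1 GeV/m)": 1.0,
        "laser wakefield (~200 GeV/m)": 200.0,
    }

    for name, g in gradients_gev_per_m.items():
        print(name, "->", TARGET_GEV / g, "m")
    # 10,000 m vs 1,000 m vs 5 m of accelerating structure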

Black hole production and public safety concerns

In the future, the possibility of black hole (BH) production at the highest energy accelerators may arise if certain predictions of superstring theory are accurate. This and other exotic possibilities have led to public safety concerns that have been widely reported in connection with the LHC, which began operation in 2008. The various possible dangerous scenarios have been assessed as presenting "no conceivable danger" in the latest risk assessment produced by the LHC Safety Assessment Group. If they are produced, it is proposed that BHs would evaporate extremely quickly via the unconfirmed theory of Bekenstein-Hawking radiation. If colliders can produce BHs, cosmic rays (and particularly ultra-high-energy cosmic rays, UHECRs) must have been producing them for eons, but they have yet to harm us. It has been argued that to conserve energy and momentum, any BHs created in a collision between an UHECR and local matter would necessarily be produced moving at relativistic speed with respect to the Earth, and should escape into space, as their accretion and growth rate should be very slow, while BHs produced in colliders (with components of equal mass) would have some chance of having a velocity less than Earth escape velocity, 11.2 km per sec, and would be liable to capture and subsequent growth. Yet even on such scenarios the collisions of UHECRs with white dwarfs and neutron stars would lead to their rapid destruction, but these bodies are observed to be common astronomical objects. Thus if stable micro black holes should be produced, they must grow far too slowly to cause any noticeable macroscopic effects within the natural lifetime of the solar system.

Source: Wikipedia

Circular or cyclic accelerators | Cyclotrons | Synchrocyclotrons and isochronous cyclotrons | Synchrotrons


Circular or cyclic accelerators
In the circular accelerator, particles move in a circle until they reach sufficient energy. The particle track is typically bent into a circle using electromagnets. The advantage of circular accelerators over linear accelerators (linacs) is that the ring topology allows continuous acceleration, as the particle can transit indefinitely. Another advantage is that a circular accelerator is smaller than a linear accelerator of comparable power (i.e. a linac would have to be extremely long to have the equivalent power of a circular accelerator).

Depending on the energy and the particle being accelerated, circular accelerators suffer a disadvantage in that the particles emit synchrotron radiation. When any charged particle is accelerated, it emits electromagnetic radiation and secondary emissions. As a particle traveling in a circle is always accelerating towards the center of the circle, it continuously radiates along the tangent of the circle. This radiation is called synchrotron light and depends highly on the mass of the accelerating particle. For this reason, many high energy electron accelerators are linacs. Certain accelerators (synchrotrons) are however built specially for producing synchrotron light (X-rays).

Since the special theory of relativity requires that matter always travels slower than the speed of light in a vacuum, in high-energy accelerators, as the energy increases the particle speed approaches the speed of light as a limit, never quite attained. Therefore particle physicists do not generally think in terms of speed, but rather in terms of a particle's energy or momentum, usually measured in electron volts (eV). An important principle for circular accelerators, and particle beams in general, is that the curvature of the particle trajectory is proportional to the particle charge and to the magnetic field, but inversely proportional to the (typically relativistic) momentum.
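
That proportionality can be made concrete. For a singly charged particle the radius of curvature is r = p/(qB), which for momentum in GeV/c and field in tesla reduces to the convenient shortcut r[m] ≈ p/(0.2998·B). A minimal sketch; the 7000 GeV/c and 8.3 T figures are roughly LHC-like numbers used only as an illustration:

    # Bending radius r = p/(qB) for a unit-charge particle, with the usual
    # accelerator shortcut: r [m] = p [GeV/c] / (0.2998 * B [T]).
    def bending_radius_m(p_gev_per_c, b_tesla):
        return p_gev_per_c / (0.2998 * b_tesla)

    # A 7000 GeV/c proton in an 8.3 T dipole (roughly LHC-like numbers)
    # bends on a radius of nearly 3 km:
    print(round(bending_radius_m(7000, 8.3)))  # ~2813 m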

Cyclotrons
The earliest circular accelerators were cyclotrons, invented in 1929 by Ernest O. Lawrence at the University of California, Berkeley. Cyclotrons have a single pair of hollow 'D'-shaped plates to accelerate the particles and a single large dipole magnet to bend their path into a circular orbit. It is a characteristic property of charged particles in a uniform and constant magnetic field B that they orbit with a constant period, at a frequency called the cyclotron frequency, so long as their speed is small compared to the speed of light c. This means that the accelerating D's of a cyclotron can be driven at a constant frequency by a radio frequency (RF) accelerating power source, as the beam spirals outwards continuously. The particles are injected in the centre of the magnet and are extracted at the outer edge at their maximum energy.

Cyclotrons reach an energy limit because of relativistic effects whereby the particles effectively become more massive, so that their cyclotron frequency drops out of synch with the accelerating RF. Therefore simple cyclotrons can accelerate protons only to an energy of around 15 million electron volts (15 MeV, corresponding to a speed of roughly 10% of c), because the protons get out of phase with the driving electric field. If accelerated further, the beam would continue to spiral outward to a larger radius but the particles would no longer gain enough speed to complete the larger circle in step with the accelerating RF. Cyclotrons are nevertheless still useful for lower energy applications.
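
The underlying relation is the cyclotron frequency f = qB/(2πγm), where the relativistic factor γ grows with kinetic energy. A short sketch of the detuning at the 15 MeV limit quoted above; the 1.5 T field is an arbitrary illustrative value:

    # Cyclotron frequency f = qB/(2*pi*gamma*m): as kinetic energy grows,
    # gamma rises and the orbit frequency falls out of step with a
    # fixed-frequency RF drive.
    from math import pi

    Q = 1.602e-19     # proton charge, C
    M0 = 1.673e-27    # proton rest mass, kg
    REST_MEV = 938.3  # proton rest energy, MeV

    def cyclotron_freq_hz(b_tesla, kinetic_mev=0.0):
        gamma = 1.0 + kinetic_mev / REST_MEV  # relativistic factor
        return Q * b_tesla / (2 * pi * gamma * M0)

    b = 1.5  # tesla, illustrative
    f0 = cyclotron_freq_hz(b)
    f15 = cyclotron_freq_hz(b, 15.0)
    print(f0, f15, 1 - f15 / f0)  # ~2.29e7 Hz, with a ~1.6% slip at 15 MeV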

Synchrocyclotrons and isochronous cyclotrons
There are ways of modifying the classic cyclotron to increase the energy limit. This may be done in a continuous-beam, constant-frequency machine by shaping the magnet poles so as to increase the magnetic field with radius. Then higher energy particles travel a shorter distance in each orbit than they otherwise would, and can remain in phase with the accelerating field. Such machines are called isochronous cyclotrons. Their advantage is that they can deliver continuous beams of higher average intensity, which is useful for some applications. The main disadvantages are the size and cost of the large magnet needed, and the difficulty in achieving the higher field required at the outer edge.

Another possibility, the synchrocyclotron, accelerates the particles in bunches, in a constant B field, but reduces the RF accelerating field's frequency so as to keep the particles in step as they spiral outward. This approach suffers from low average beam intensity due to the bunching, and again from the need for a huge magnet of large radius and constant field over the larger orbit demanded by high energy.

Betatrons
Another type of circular accelerator, invented in 1940 for accelerating electrons, is the Betatron. These machines, like synchrotrons, use a donut-shaped ring magnet (see below) with a cyclically increasing B field, but accelerate the particles by induction from the increasing magnetic field, as if they were the secondary winding in a transformer, due to the changing magnetic flux through the orbit. Achieving constant orbital radius while supplying the proper accelerating electric field requires that the magnetic flux linking the orbit be somewhat independent of the magnetic field on the orbit, bending the particles into a constant radius curve. These machines have in practice been limited by the large radiative losses suffered by the electrons moving at nearly the speed of light in a relatively small radius orbit.

Synchrotrons
To reach still higher energies, with relativistic mass approaching or exceeding the rest mass of the particles (for protons, billions of electron volts, GeV), it is necessary to use a synchrotron. This is an accelerator in which the particles are accelerated in a ring of constant radius. An immediate advantage over cyclotrons is that the magnetic field need only be present over the actual region of the particle orbits, which is very much narrower than the diameter of the ring. (The largest cyclotron built in the US had a 184-inch-diameter magnet pole, whereas the diameter of the LEP and LHC is nearly 10 km. The aperture of the beam of the latter is of the order of centimeters.)

However, since the particle momentum increases during acceleration, it is necessary to turn up the magnetic field B in proportion to maintain constant curvature of the orbit. In consequence synchrotrons cannot accelerate particles continuously, as cyclotrons can, but must operate cyclically, supplying particles in bunches, which are delivered to a target or an external beam in beam "spills" typically every few seconds.

Since high energy synchrotrons do most of their work on particles that are already traveling at nearly the speed of light c, the time to complete one orbit of the ring is nearly constant, as is the frequency of the RF cavity resonators used to drive the acceleration.

Note also a further point about modern synchrotrons: because the beam aperture is small and the magnetic field does not cover the entire area of the particle orbit as it does for a cyclotron, several necessary functions can be separated. Instead of one huge magnet, one has a line of hundreds of bending magnets, enclosing (or enclosed by) vacuum connecting pipes. The design of synchrotrons was revolutionized in the early 1950s with the discovery of the strong focusing concept. The focusing of the beam is handled independently by specialized quadrupole magnets, while the acceleration itself is accomplished in separate RF sections, rather similar to short linear accelerators. Also, there is no necessity that cyclic machines be circular, but rather the beam pipe may have straight sections between magnets where beams may collide, be cooled, etc. This has developed into an entire separate subject, called "beam physics" or "beam optics".

More complex modern synchrotrons such as the Tevatron, LEP, and LHC may deliver the particle bunches into storage rings of magnets with constant B, where they can continue to orbit for long periods for experimentation or further acceleration. The highest-energy machines such as the Tevatron and LHC are actually accelerator complexes, with a cascade of specialized elements in series, including linear accelerators for initial beam creation, one or more low energy synchrotrons to reach intermediate energy, storage rings where beams can be accumulated or "cooled" (reducing the magnet aperture required and permitting tighter focusing; see beam cooling), and a last large ring for final acceleration and experimentation.

Source: Wikipedia

Particle accelerator | Uses of particle accelerators | Linear particle accelerators | Tandem electrostatic accelerators

Particle accelerator
A particle accelerator (or atom smasher) is a device that uses electric fields to propel electrically-charged particles to high speeds and to contain them in well-defined beams. An ordinary CRT television set is a simple form of accelerator. There are two basic types: linear accelerators and circular accelerators.

Uses of particle accelerators
Beams of high-energy particles are useful for both fundamental and applied research in the sciences. For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons (e.g. electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as essentially 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons, and anti-protons, interacting with each other or with the simplest nuclei (eg, hydrogen or deuterium) at the highest possible energies, generally hundreds of GeV or more. Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure, interactions, and properties of the nuclei themselves, and of condensed matter at extremely high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon. At lower energies, beams of accelerated nuclei are also used in medicine, as for the treatment of cancer.

Besides being of fundamental interest, high energy electrons may be coaxed into emitting extremely bright and coherent beams of high energy photons – ultraviolet and X ray – via synchrotron radiation, which photons have numerous uses in the study of atomic structure, chemistry, condensed matter physics, biology, and technology. Examples include the ESRF in Europe, which has recently been used to extract detailed 3-dimensional images of insects trapped in amber. Thus there is a great demand for electron accelerators of moderate (GeV) energy and high intensity.

Low-energy machines
Everyday examples of particle accelerators are cathode ray tubes found in television sets and X-ray generators. These low-energy accelerators use a single pair of electrodes with a DC voltage of a few thousand volts between them. In an X-ray generator, the target itself is one of the electrodes. A low-energy particle accelerator called an ion implanter is used in the manufacture of integrated circuits.

High-energy machines
DC accelerator types capable of accelerating particles to speeds sufficient to cause nuclear reactions are Cockcroft-Walton generators or voltage multipliers, which convert AC to high voltage DC, or Van de Graaff generators that use static electricity carried by belts.
The largest and most powerful particle accelerators, such as the RHIC, the Large Hadron Collider (LHC) (scheduled to start operation in September 2009) and the Tevatron, are used for experimental particle physics.

Particle accelerators can also produce proton beams, which can produce "proton-heavy" medical or research isotopes as opposed to the "neutron-heavy" ones made in fission reactors. An example of this type of machine is LANSCE at Los Alamos.

Linear particle accelerators
In a linear accelerator (linac), particles are accelerated in a straight line with a target of interest at one end. Linacs are very widely used – every cathode ray tube contains one. They are also used to provide an initial low-energy kick to particles before they are injected into circular accelerators. The longest linac in the world is the Stanford Linear Accelerator, SLAC, which is 3 km (2 miles) long. SLAC is an electron-positron collider.

Linear high-energy accelerators use a linear array of plates (or drift tubes) to which an alternating high-voltage field is applied. As the particles approach a plate they are accelerated towards it by an opposite polarity charge applied to the plate. As they pass through a hole in the plate, the polarity is switched so that the plate now repels them and they are now accelerated by it towards the next plate. Normally a stream of "bunches" of particles is accelerated, so a carefully controlled AC voltage is applied to each plate to continuously repeat this process for each bunch.
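
A toy model of this gap-by-gap energy gain: each crossing adds at most q·V of kinetic energy, scaled by the cosine of the bunch's phase error relative to the RF. The gap voltage and gap count are arbitrary illustrative numbers:

    # Each gap adds at most q*V of energy; a phase error scales the kick.
    from math import cos, pi

    GAP_VOLTAGE = 100e3  # volts per gap (illustrative)
    N_GAPS = 50          # illustrative

    def final_energy_ev(phase_error_rad=0.0):
        """Energy gained by a unit-charge particle after all gaps."""
        return N_GAPS * GAP_VOLTAGE * cos(phase_error_rad)

    print(final_energy_ev())        # 5.0e6 eV when crossing on-crest
    print(final_energy_ev(pi / 6))  # reduced gain 30 degrees off-crest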

As the particles approach the speed of light the switching rate of the electric fields becomes so high that they operate at microwave frequencies, and so RF cavity resonators are used in higher energy machines instead of simple plates.

Linear accelerators are also widely used in medicine, for radiotherapy and radiosurgery. Medical grade LINACs accelerate electrons using a klystron and a complex bending magnet arrangement which produces a beam of 6-30 million electron-volt (MeV) energy. The electrons can be used directly or they can be collided with a target to produce a beam of X-rays. The reliability, flexibility and accuracy of the radiation beam produced has largely supplanted the older use of Cobalt-60 therapy as a treatment tool.

Tandem electrostatic accelerators
In a tandem accelerator, a negatively charged ion gains energy by attraction to the very high positive voltage at the geometric centre of the pressure vessel. When it arrives at the centre region, known as the high-voltage terminal, some electrons are stripped from the ion. The ion then becomes positive and is accelerated away by the high positive voltage. This type of accelerator is thus called a 'tandem' accelerator: it has two stages of acceleration, first pulling and then pushing the charged particles. An example of a tandem accelerator is ANTARES (Australian National Tandem Accelerator for Applied Research).

Source: Wikipedia

Monday, 15 June 2009

BCS theory | Ginzburg-Landau theory | Classification | Applications

BCS theory

The most widely accepted microscopic theory explaining superconductivity is BCS theory, presented in 1957. Superconductivity can be explained as an application of Bose-Einstein condensation. However, electrons are fermions, so this theory cannot be applied to them directly. The idea on which BCS theory rests is that the electrons pair up, forming a pair of fermions that behaves like a boson. Such a pair is called a Cooper pair, and its binding is attributed to interactions between the electrons mediated by the crystal structure of the material.

Ginzburg-Landau theory

A different approach is the Ginzburg-Landau theory, which concentrates on macroscopic properties rather than on microscopic theory, building on symmetry breaking at the phase transition.

This theory predicts the properties of inhomogeneous substances much better, since BCS theory is applicable only if the substance is homogeneous, that is, if the band-gap energy is constant in space. When the substance is inhomogeneous, the problem can be intractable from the microscopic point of view.

The theory is based on a variational calculation that seeks to minimize the Helmholtz free energy with respect to the density of electrons in the superconducting state. The conditions for applying the theory are:

  • the temperatures involved must be close to the critical temperature, since the theory rests on a Taylor series expansion around Tc;
  • the pseudo-wavefunction Ψ, as well as the vector potential A, must vary smoothly.

The theory predicts two characteristic lengths (see the sketch after this list):

  • penetration depth: the distance to which a magnetic field penetrates the superconducting material
  • coherence length: the approximate size of a Cooper pair
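
The ratio of these two lengths, the Ginzburg-Landau parameter κ = (penetration depth)/(coherence length), is what separates type I from type II superconductors (see the classification below): κ < 1/√2 gives type I, κ > 1/√2 gives type II. A minimal sketch; the sample lengths are rough textbook orders of magnitude, not measured values:

    # Ginzburg-Landau parameter kappa = penetration depth / coherence length;
    # kappa < 1/sqrt(2) -> type I, kappa > 1/sqrt(2) -> type II.
    from math import sqrt

    def classify(lambda_nm, xi_nm):
        kappa = lambda_nm / xi_nm
        kind = "type I" if kappa < 1 / sqrt(2) else "type II"
        return "kappa = %.2f -> %s" % (kappa, kind)

    # Rough textbook orders of magnitude (assumed, not measured here):
    print("Al  :", classify(lambda_nm=16, xi_nm=1600))  # clean aluminium: type I
    print("NbTi:", classify(lambda_nm=300, xi_nm=5))    # NbTi alloy: type II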

Classification

Superconductors can be classified according to:

  • Their physical behavior: type I (showing an abrupt change from one phase to the other, in other words a first-order phase transition) or type II (passing through a mixed state in which both phases coexist, that is, a second-order phase transition).
  • The theory that explains them: conventional (if explained by BCS theory) or unconventional (otherwise).
  • Their critical temperature: high-temperature (generally so called if the superconducting state can be reached by cooling with liquid nitrogen, that is, if Tc > 77 K) or low-temperature (if not).

Applications

Superconducting magnets are some of the most powerful electromagnets known. They are used in maglev trains, in nuclear magnetic resonance machines in hospitals and for steering the beam of a particle accelerator. They can also be used for magnetic separation, in which weakly magnetic particles are extracted from a background of less magnetic or non-magnetic particles, as in the pigment industries.

Superconductors have also been used to make digital circuits and radio-frequency and microwave filters for mobile-telephone base stations.

Superconductors are used to build Josephson junctions, which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. Series arrays of Josephson devices have been used to define the volt in the International System of Units (SI). Depending on its mode of operation, a Josephson junction can be used as a photon detector or as a mixer. The large change in resistance at the transition from the normal to the superconducting state is used to build thermometers in cryogenic photon detectors.
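
The voltage definition works because a junction driven by microwaves at frequency f develops quantized voltage steps V = n·f/K_J, where K_J = 2e/h is the Josephson constant (conventional value 483,597.9 GHz/V). A small sketch of the conversion; the 70 GHz drive frequency is just an illustrative choice:

    # Josephson voltage steps: V = n*f/K_J with K_J = 2e/h (K_J-90 value).
    K_J = 483597.9e9  # Hz per volt

    def step_voltage(n, f_hz):
        """Voltage of the n-th step under microwave drive at f_hz."""
        return n * f_hz / K_J

    # At a 70 GHz drive each step is ~145 microvolts, which is why series
    # arrays of many junctions are used to reach practical voltages:
    print(step_voltage(1, 70e9))  # ~1.447e-4 V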

Están apareciendo nuevos mercados donde la relativa eficiencia, el tamaño y el peso de los dispositivos basados en los superconductores de alta temperatura son superiores a los gastos adicionales que ellos suponen.

Promising future applications include high-performance transformers, energy-storage devices, electric power transmission, electric motors (for example, for vehicle propulsion, as in vactrains or maglev trains), and magnetic levitation devices. However, superconductivity is sensitive to changing magnetic fields, so applications that use alternating current (for example, transformers) will be more difficult to develop than those that rely on direct current.

source: wikipedia

History of superconductivity I The main theories I High-temperature superconductors

History of superconductivity

The discovery

Experiments measuring electrical resistance at low temperatures were already being carried out in the nineteenth century, with James Dewar pioneering the field.

Superconductivity as such, however, was not discovered until 1911, when the Dutch physicist Heike Kamerlingh Onnes observed that the electrical resistance of mercury vanished abruptly on cooling to 4 K (-269 °C), whereas it had been expected to decrease gradually toward absolute zero. Thanks to his discoveries, chiefly his method for producing liquid helium, he received the Nobel Prize in Physics two years later. During the early years the phenomenon was known as supraconductivity.

In 1913 it was discovered that a sufficiently strong magnetic field also destroys the superconducting state, and three years later the existence of a critical electric current was discovered as well.

Since superconductivity is an essentially quantum phenomenon, no great advances were made in understanding it for decades: the concepts and mathematical tools available to physicists of the era were not sufficient to attack the problem until the 1950s. Until then, research was purely phenomenological, for example the discovery of the Meissner effect in 1933 and its first explanation two years later by the brothers Fritz and Heinz London through the development of the London equation.

The main theories

The greatest advances in the understanding of superconductivity took place in the 1950s: Ginzburg-Landau theory was published in 1950, and BCS theory saw the light in 1957.

BCS theory was developed by Bardeen, Cooper, and Schrieffer (the name BCS comes from their initials), for which the three received the Nobel Prize in Physics in 1972. The theory could be developed thanks to two fundamental clues provided by experimental physicists in the early 1950s:

  • the discovery of the isotope effect in 1950 (which linked superconductivity to the crystal lattice),
  • and Lars Onsager's finding in 1953 that the charge carriers are in fact pairs of electrons, called Cooper pairs (a result of experiments on the quantization of the magnetic flux passing through a superconducting ring).

Ginzburg-Landau theory is a generalization of London theory, developed by Vitaly Ginzburg and Lev Landau in 1950.[2] Although it precedes BCS theory by seven years, physicists in Western Europe and the United States paid it little attention, owing to its phenomenological rather than theoretical character, combined with the lack of communication across the Iron Curtain in those years. The situation changed in 1959, when Lev Gor'kov showed that the theory could be derived rigorously from the microscopic theory[3] in a paper he also published in English.[4]

In 1962 Brian David Josephson predicted that an electric current could flow between two superconductors even if a small gap separated them, owing to the tunnel effect. A year later Anderson and Rowell confirmed this experimentally. The phenomenon became known as the Josephson effect; it is among the most important phenomena in superconductors and has a wide variety of applications, from magnetoencephalography to earthquake prediction.

High-temperature superconductors

After some years of relative stagnation, in 1986 Bednorz and Müller discovered that a family of ceramic materials, copper oxides with a perovskite structure, were superconductors; within a year, related compounds with critical temperatures above 90 kelvin had been found. These materials, known as high-temperature superconductors, stimulated renewed interest in superconductivity research. As a subject of pure research, they present a new phenomenon that is not explained by current theories. And because the superconducting state persists up to more manageable temperatures, above the boiling point of liquid nitrogen, many commercial applications would become viable, especially if materials with even higher critical temperatures were discovered.

Obtaining superconducting materials

Because of the low temperatures needed to achieve superconductivity, the most common materials are usually cooled with liquid helium. The required setup is complex and costly, and it is used in only a handful of applications, such as the construction of very powerful electromagnets for nuclear magnetic resonance.

In the 1980s, however, high-temperature superconductors were discovered, which show the phase transition at temperatures above the liquid-vapor transition of nitrogen. This has greatly reduced the cost of studying these materials and opened the door to the possible existence of room-temperature superconductors, which would amount to a revolution for twenty-first-century industry. The main drawback of these materials is their ceramic composition, which makes them poorly suited to the fabrication of wires by plastic deformation, the most obvious use for this kind of material. However, new techniques have been developed for fabricating tapes, such as IBAD (ion-beam-assisted deposition), with which wires more than one kilometer long have been produced.

Theory

Although the phenomenon of superconductivity remains an open question in physics, there are currently two fundamental approaches: the microscopic or quantum-mechanical one (based on BCS theory) and the macroscopic or phenomenological one (on which Ginzburg-Landau theory concentrates).

A superconductor is not simply a perfect normal conductor

Contrary to what one might at first think, a superconductor behaves very differently from a normal conductor: it is not a conductor whose resistance is close to zero, but one whose resistance is exactly zero. This cannot be explained by the models used for ordinary conductors, such as the Drude model.

To show this, assume the opposite hypothesis: imagine for a moment that a superconductor behaves like a normal conductor. In that case the electrons would be scattered in some way and their equation of motion, in the Drude model, would be

m\frac{d}{dt}\langle\vec{v}\rangle = -e\vec{E} - \frac{m}{\tau}\langle\vec{v}\rangle

where \langle\vec{v}\rangle is the average velocity of the electrons, m their mass, e their charge, τ the mean time between collisions, and \vec{E} the electric field in which they move. Assuming the field varies slowly, the steady-state solution leads to Ohm's law:

\vec{J}=\sigma\vec{E}=\frac{ne^2\tau}{m}\vec{E}

where \vec{J} is the current density, σ the electrical conductivity, and n the electron density.
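To put a scale on τ (using standard textbook values for copper, not taken from the original text): with σ ≈ 6×10^7 S/m, n ≈ 8.5×10^28 m^-3, m ≈ 9.1×10^-31 kg and e ≈ 1.6×10^-19 C,

\tau = \frac{\sigma m}{n e^2} \approx 2.5 \times 10^{-14}\ \mathrm{s}

so in an ordinary metal an electron is scattered tens of trillions of times per second; zero resistance would require this time to diverge.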

Now, if we let the resistance tend to zero, the conductivity tends to infinity, and therefore the time between collisions, τ, would tend to infinity; in other words, there would be no collisions at all. This is how a normal conductor with zero resistance would behave. But since the current density cannot be infinite, the only possibility is that the electric field vanishes:

\vec{E}=0

However, by Faraday's law, a vanishing electric field implies that the magnetic field must be constant in time:

\nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t} = 0 \;\rightarrow\; \vec{B}(t) = \text{constant}

But this contradicts the Meissner effect: a superconductor expels the magnetic field from its interior whatever the field's previous history, whereas a perfect conductor would merely freeze in whatever field it already contained. Superconductivity is therefore a phenomenon very different from the one "perfect conductivity" would imply, and it requires a different theory to explain it.
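For completeness, the London equation mentioned in the history section above resolves exactly this point (a standard result, restated here): the London brothers replaced Ohm's law with

\frac{\partial \vec{J}}{\partial t} = \frac{n e^2}{m}\vec{E} \qquad \nabla \times \vec{J} = -\frac{n e^2}{m}\vec{B}

which, combined with Ampère's law, gives \nabla^2 \vec{B} = \vec{B}/\lambda_L^2 with \lambda_L = \sqrt{m/(\mu_0 n e^2)}: the magnetic field decays exponentially inside the material, which is precisely the Meissner effect.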

source: wikipedia