Abstract:
Nanomaterials are at the cutting edge of the rapidly developing field of nanotechnology. The potential of nanoparticles in cancer drug delivery is vast, with novel applications constantly being explored. Multifunctional nanoparticles play a significant role in cancer drug delivery. The promising implications of these platforms for advances in cancer diagnostics and therapeutics form the basis of this review.
The paper presents recent advances in cancer drug delivery. Tumours present physiological barriers such as vascular endothelial pores, heterogeneous blood supply and heterogeneous architecture. For a treatment to be successful, it is very important to overcome these barriers. Nanoparticles have attracted the attention of scientists because of their multifunctional character. The treatment of cancer using targeted drug delivery nanoparticles is among the latest achievements in the medical field. Various nanodevices can be used with minimal side effects; they mainly include cantilevers, nanopores, nanotubes, quantum dots (QDs), nanoshells, dendrimers and biodegradable hydrogels.
The paper surveys the main approaches to treating cancer using various nanodevices.
Introduction to Nanoparticles:
A nanometer is one-billionth of a meter (10⁻⁹ m); a sheet of paper is about 100,000 nanometers thick. Nanoscale tools give us the ability to see cells and molecules that we otherwise cannot detect through conventional imaging. The ability to pick up what happens inside the cell, to monitor therapeutic intervention and to see when a cancer cell is mortally wounded or actually activated is critical for the successful diagnosis and treatment of this disease.
For drug delivery in cancer we have nanoscale devices. Nanoscale devices are 10² to 10⁴ times smaller than human cells but are similar in size to large biomolecules such as enzymes and receptors. Nanoscale devices smaller than 50 nm can easily enter most cells, and those smaller than 20 nm can move out of blood vessels as they circulate through the body. Nanodevices are therefore well suited to serve as customized, targeted drug delivery vehicles that carry large doses of chemotherapeutic agents or therapeutic genes into malignant cells while sparing healthy cells.
Nanoscale constructs can serve as customizable, targeted drug delivery vehicles capable of ferrying large doses of chemotherapeutic agents or therapeutic genes into malignant cells while sparing healthy cells, greatly reducing or eliminating the often unpalatable side effects that accompany many current cancer therapies. Several nanotechnological approaches have been used to improve delivery of chemotherapeutic agents to cancer cells with the goal of minimizing toxic effects on healthy tissues while maintaining antitumor efficacy. Some nanoscale delivery devices, such as dendrimers, silica-coated micelles, ceramic nanoparticles, and cross-linked liposomes, can be targeted to cancer cells. Increasing the selectivity of drugs towards cancer cells in this way reduces their toxicity to normal tissue.
Types of Nanoparticles
Inorganic nanoparticles
Organic nanoparticles
Liposomes, dendrimers, carbon nanotubes, emulsions, and other polymers are a large and well-established group of organic particles. Use of these organic nanoparticles has already produced exciting results. Liposomes are being used as vehicles for drug delivery in different human tumours, including breast cancer. Most inorganic nanoparticles share the same basic structure. This consists of a central core that defines the fluorescence, optical, magnetic, and electronic properties of the particle, with a protective organic coating on the surface. This outside layer protects the core from degradation in a physiologically aggressive environment and can form electrostatic or covalent bonds, or both, with positively charged agents and biomolecules that have basic functional groups such as amines and thiols. Several research groups have successfully linked fluorescent nanoparticles to peptides, proteins, and oligonucleotides.
Important characteristics
Size
Encapsulation Efficiency
Zeta potential (surface charge)
Release characteristics.
Types of Nanoparticles
Multifunctional Nanoparticles
Lipid/Polymer Nanoparticles
Gold / Magnetic Nanoparticles
Virus Based Nanoparticles
Dry Powder Aerosol
Nanomedicine
One of the main goals of nanomedicine is to create medically useful nanodevices that can function inside the body. Additionally, nanomedicine will have an impact on the key challenges in cancer therapy, such as localized drug delivery and specific targeting. Among the recently developed nanomedicines and nanodevices, quantum dots, nanowires, nanotubes, nanocantilevers, nanopores, nanoshells and nanoparticles are potentially the most useful for treating different types of cancer. Nanoparticles can take the form of nanospheres or nanocapsules. Nanomedicines are a recent off-shoot of the application of nanotechnology to medical and pharmaceutical challenges, but have in fact been around for much longer in the guise of drug delivery systems. Nanomedicines that facilitate uptake and transport of therapeutically active molecules (‘delivery systems’) tend to be based on supramolecular assemblies of drug and functional carrier materials. The use of nanomedicines facilitates the creation of dose differentials between the site of the disease and the rest of the body, thus maximizing the therapeutic effect while minimizing non-specific side effects.
Drug Delivery for Cancer Treatment
Core features of cancer cell
Abnormal growth control
Improved cell survival
Abnormal differentiation
Unlimited replicative potential.
Transport of an anticancer drug in the interstitium is governed by the physiological and physicochemical properties of the interstitium and by the physicochemical properties of the molecule itself. Thus, to deliver therapeutic agents to tumour cells in vivo, one must overcome the following problems:
Drug resistance at the tumour level due to physiological barriers
Drug resistance at the cellular level
Distribution, biotransformation and clearance of anticancer drugs in the body.
Direct introduction of anticancer drugs into the tumour:
Injection directly into the tumour
Tumour necrosis therapy
Injection into the arterial blood supply of the tumour
Local injection into the tumour for radiopotentiation
Localized delivery of anticancer drugs by electroporation (electrochemotherapy)
Local delivery by anticancer drug implants
Routes of Drug delivery
Intraperitoneal
Intrathecal
Nasal
Oral
Pulmonary inhalation
Subcutaneous injection or implant
Transdermal drug delivery
Vascular route: intravenous, intra-arterial
Systemic delivery targeted to tumour
Heat-activated targeted drug delivery
Tissue-selective drug delivery for cancer using carrier-mediated transport systems
Tumour-activated prodrug therapy for targeted delivery of chemotherapy
Pressure-induced filtration of drug across vessels to tumour
Promoting selective permeation of the anticancer agent into the tumour
Two-step targeting using bispecific antibody
Site-specific delivery and light-activation of anticancer proteins
Drug delivery targeted to blood vessels of tumour
Antiangiogenesis therapy
Angiolytic therapy
Drugs to induce clotting in blood vessels of tumour
Vascular targeting agents
Special formulations and carriers of anticancer drugs
Albumin based drug carriers
Carbohydrate-enhanced chemotherapy
Delivery of proteins and peptides for cancer therapy
Fatty acids as targeting vectors linked to active drugs
Microspheres
Monoclonal antibodies
Nanoparticles
PEGylated liposomes (coated with a polyethylene glycol layer)
Polyethylene glycol (PEG) technology
Single-chain antigen-binding technology
Transmembrane drug delivery to intracellular targets
Cytoporter
Receptor-mediated endocytosis
Transduction of proteins and Peptides
Vitamins as carriers for anticancer agents
Antisense therapy
Cell therapy
Gene therapy
Genetically modified bacteria
Oncolytic viruses
RNA interference and other biological therapies
Pathways of Nanoparticles in Cancer drug delivery
Nanotechnology has tremendous potential to make an important contribution in cancer prevention, detection, diagnosis, imaging and treatment. It can target a tumor, carry imaging capability to document the presence of tumor, sense pathophysiological defects in tumor cells, deliver therapeutic genes or drugs based on tumor characteristics, respond to external triggers to release the agent and document the tumor response and identify residual tumor cells.
Nanoparticles are valued for their nanoscale structure, yet nanoparticles used in cancer are still larger than many anticancer drugs. Their “large” size can make it difficult for them to evade organs such as the liver, spleen, and lungs, which are constantly clearing foreign materials from the body. In addition, they must be able to take advantage of subtle differences in cells to distinguish between normal and cancerous tissues. Indeed, it is only recently that researchers have begun to successfully engineer nanoparticles that can effectively evade the immune system and actively target tumours. Active tumor targeting of nanoparticles involves attaching molecules, known collectively as ligands, to the outsides of nanoparticles. These ligands are special in that they can recognize and bind to complementary molecules, or receptors, found on the surface of tumor cells. When such targeting molecules are added to a drug delivery nanoparticle, more of the anticancer drug finds and enters the tumor cell, increasing the efficacy of the treatment and reducing toxic effects on surrounding normal tissues.
Characteristic Nanoparticles Used for Drug Delivery in Cancer Treatment
Structure | Size | Role in drug delivery
Carbon magnetic nanoparticles | 40-50 nm | Drug delivery and targeted cell destruction
Ceramic nanoparticles | 1-20 nm | Hold therapeutic substances such as DNA in their cavities
Dendrimers | ~35 nm | Accumulate in tumour tissue and allow the drug to act as a sensitizer for photodynamic therapy without being released
Chitosan nanoparticles | 110-180 nm | High encapsulation efficiency; in vitro release studies show a burst effect followed by slow, continuous release
Liposomes | 25-50 nm | A new generation of liposomes incorporates fullerenes to deliver drugs that are not water soluble and tend to have large molecules
Low-density lipoprotein | 20-25 nm | Drug solubilized in the lipid core or attached to the surface
Nanoemulsions | 20-25 nm | Drug in oil and/or liquid phases to improve absorption
Nanolipospheres | 25-50 nm | Carriers incorporating lipophilic and hydrophilic drugs
Nanoparticle composites | ~40 nm | Attached to guiding molecules such as MAbs for targeted drug delivery
Nanoparticles | 25-200 nm | Act as continuous matrices containing dispersed or dissolved drug
Nanopill/micelle | 20-45 nm | Made from two polymer molecules, one water-repellent and the other hydrophilic, that self-assemble into a sphere called a micelle, which can deliver drugs to specific structures within the cell
Nanospheres | 50-500 nm | Hollow ceramic nanospheres created by ultrasound
Nanovesicles | 25-3000 nm | Single or multilamellar bilayer spheres containing the drug in lipids
Polymer nanocapsules | 50-200 nm | Used for enclosing drugs
The Role of Nanoparticles in Cancer Drug Delivery
Nanoparticles for treating a challenging disease such as cancer may be defined as submicronic (< 1 µm) colloidal systems generally, but not necessarily, made of polymers (biodegradable or not). Depending on the process used for their preparation, nanospheres or nanocapsules can be obtained. Unlike nanospheres (matrix systems in which the drug is dispersed throughout the particle), nanocapsules are vesicular systems in which the drug is confined to an aqueous or oily cavity surrounded by a single polymeric membrane. Nanocapsules may thus be considered a ‘reservoir’ system. If designed appropriately, such a carrier may act as a drug vehicle able to target tumor tissues or cells, to a certain extent, while protecting the drug from premature inactivation during its transport. Indeed, at the tumor level, the accumulation mechanism of intravenously injected nanoparticles relies on passive diffusion or convection across the leaky, hyperpermeable tumor vasculature.
The uptake can also result from a specific recognition in the case of ligand-decorated nanoparticles (‘active targeting’). Understanding and experience from other fields such as nanotechnology, advanced polymer chemistry, and electronic engineering are being brought together to develop novel methods of drug delivery. The current focus in the development of cancer therapies is on targeted drug delivery that provides therapeutic concentrations of anticancer agents at the site of action while sparing normal tissues. Cancer drug delivery is no longer simply packaging the drug in new formulations for different routes of delivery. Targeted drug delivery to tumours can increase the selectivity for killing cancer cells, decrease peripheral/systemic toxicity and permit dose escalation, making it clearly more advantageous. Drug delivery using micro- and nanoparticles has been shown to have great potential for achieving controlled and targeted therapeutic effects.
The carrier particles have specific transportation and extravasation behaviours determined by their chemical structure, size, and surface properties. These characteristics are vital for the pharmacokinetics and pharmacodynamics of the drugs being carried. To reach cancer cells in a tumor, a blood-borne therapeutic molecule or cell must make its way into the blood vessels of the tumor, cross the vessel wall into the interstitium, and finally migrate through the interstitium. For a molecule of given size, charge, and configuration, each of these transport processes may involve diffusion and convection. In 2002, a fascinating article entitled “Nanoparticles Cut Tumours’ Supply Lines” was published in Science.
The article described how hungry tumours need new blood vessels to deliver sustenance. Cancer researchers have spent years working to starve tumours by blocking this blood vessel growth, or angiogenesis, with mixed success. The researchers packed a tiny particle with a gene that forces blood vessel cells to self-destruct, then “mailed” the particle to blood vessels feeding tumours in mice. This achievement gives new hope to cancer patients whose tumours depend on angiogenesis. Targeted drug delivery is an invaluable need in pharmacology. Such an approach is particularly important in tumor therapy, as the compounds are very toxic, and if they act on cells other than tumor cells, severe side effects are encountered. Any means that increases the fraction of the drug delivered to the target site will help to reduce such side effects.
Nanodevices: Detection and Cure
“Smart” dynamic nanoplatforms have the potential to change the way cancer is diagnosed, treated, and prevented. There are two basic approaches for creating nanodevices, which scientists refer to as the top-down approach and the bottom-up approach. The top-down approach involves molding or etching materials into smaller components; it has traditionally been used in making parts for computers and electronics. The bottom-up approach involves assembling structures atom by atom or molecule by molecule, and may prove useful in manufacturing devices used in medicine. Most animal cells are 10,000 to 20,000 nanometers in diameter. This means that nanoscale devices (less than 100 nanometers) can enter cells and the organelles inside them, and can interact with DNA and proteins.
The nanodevices include:
Cantilevers
Nanopores
Nanotubes
Quantum Dots (QDs)
Nanoshells
Dendrimers
Biodegradable Hydrogels
Future Herbal Nanoparticles for Cancer
Tools developed through nanotechnology may be able to detect disease in a very small amount of cells or tissue. They may also be able to enter and monitor cells within a living body. In order to successfully detect cancer at its earliest stages, scientists must be able to detect molecular changes even when they occur only in a small percentage of cells. This means the necessary tools must be extremely sensitive. The potential for nanostructures to enter and analyse single cells suggests they could meet this need.
Conclusion
Nanotechnology is definitely a medical boon for the diagnosis, treatment and prevention of cancer. It will radically change the way we diagnose, treat and prevent cancer and help meet the goal of eliminating suffering and death from the disease. Although most of the technologies described are promising and fit well with current methods of treatment, there are still safety concerns associated with the introduction of nanoparticles into the human body, and these will require further studies before some of the products can be approved. The most promising methods of drug delivery in cancer will be those that combine diagnostics with treatment. These will enable personalized management of cancer and provide an integrated protocol for diagnosis and follow-up that is so important in the management of cancer patients. Many advances are still needed to improve nanoparticles for the treatment of cancer. Future efforts will focus on identifying the mechanism and location of action for each vector and determining the general applicability of the vector to treat all stages of tumours in preclinical models. Further studies are focused on expanding the selection of drugs to be delivered by novel nanoparticle vectors. Hopefully, this will allow the development of innovative new strategies for cancer cures.
NEW DISTRIBUTED QUERY OPTIMIZATION TECHNIQUES
ABSTRACT:
TOPIC: new query processing and optimization techniques in distributed database
Nowadays data is distributed across networks, turning the world into a global village. Distributed database management systems (DDBMS) are among the most important and successful software developments of this decade. They enable computing power and data to be placed within the user environment, close to the point of user activity. The performance of a DDBMS is deeply related to the query processing and optimization strategies that involve data transmission over different nodes of the network. Most real-world data is not well structured: today's databases typically contain much unstructured data such as text, images, video, and audio, often distributed across computer networks. This situation demands new query processing and optimization techniques for the distributed database environment. These techniques provide efficient performance in optimizing query processing strategies in a distributed database environment. This paper mainly focuses on the distributed query optimization problem.
The main contents of this paper are:
• Introduction to Distributed database, Query processing and Query optimization strategies.
• Distributed Query Processing Methodology.
• Distributed Query Optimization.
• New query optimization techniques in distributed database.
• Distributed Query Optimization problems and some solutions.
• Advantages of query optimization techniques in distributed database.
• Conclusion.
Introduction:
Distributed database:
A distributed database is a database that is under the control of a central database management system (DBMS) but whose storage devices are not all attached to a common CPU. It may be stored on multiple computers located in the same physical location, or may be dispersed over a network of interconnected computers. Collections of data (e.g. in a database) can thus be distributed across multiple physical locations.
Query processing :
Query processing is defined as the activities involved in parsing, validating, optimizing and executing a query.
The main aim of query processing is to transform a query written in a high-level language (e.g. SQL) into a correct and efficient execution strategy expressed in a low-level language (implementing relational algebra), and to find information in one or more databases and deliver it to the user quickly and efficiently.
High-level user query -> Query processor -> Low-level data manipulation commands
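The following is a minimal sketch of that transformation, assuming a toy plan representation; the table, column and class names (STUDENT, name, year, Scan, Select, Project) are hypothetical and not part of any real DBMS.

```python
# Minimal sketch (illustrative only) of turning a high-level SQL query into a
# low-level, relational-algebra-style plan. All names here are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class Scan:                 # read a base relation
    table: str

@dataclass
class Select:               # selection (WHERE predicate)
    predicate: str
    child: object

@dataclass
class Project:              # projection (SELECT list)
    columns: List[str]
    child: object

# High-level user query
sql = "SELECT name FROM STUDENT WHERE year = 'senior'"

# What a query processor might emit: Project(Select(Scan(...)))
plan = Project(["name"], Select("year = 'senior'", Scan("STUDENT")))
print(plan)
```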
Query optimization :
Query optimization is defined as the activity of choosing an efficient execution strategy for processing a query. Query optimization is a part of query processing.
The main aims of query optimization are to choose a transformation that minimizes resource usage, to reduce the total execution time of the query, and to reduce its response time.
Distributed Query Processing Methodology:
Distributed query processing consists of four stages: query decomposition, data localization, global optimization and local optimization. A simple sketch of this pipeline follows the list below.
• Query decomposition: a calculus query is the input and an algebraic query is the output. This stage is subdivided into four steps: normalization, analysis, simplification and restructuring.
• Data localization: the algebraic query on distributed relations is the input and a fragment query is the output. In this stage the fragments involved are determined.
• Global optimization: the fragment query is the input and an optimized fragment query is the output. The best global schedule is found in this stage.
• Local optimization: the best global execution schedule is the input and locally optimized queries are the output. It contains two sub-steps: selecting the best access path and applying centralized optimization techniques.
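As a conceptual skeleton only, the four stages can be pictured as a chain of transformations. The function bodies below are placeholders standing in for real decomposition and optimization logic, and the query string and fragment names are invented.

```python
# Conceptual skeleton of the four-stage distributed query processing pipeline.

def query_decomposition(calculus_query):
    # normalization, analysis, simplification, restructuring
    return {"algebraic_query": calculus_query}

def data_localization(decomposed, fragment_schema):
    # rewrite the query over global relations into a query over fragments
    return {"fragment_query": decomposed["algebraic_query"],
            "fragments": fragment_schema}

def global_optimization(localized, network_stats):
    # choose the best global schedule: join order, execution sites, transfers
    return {"global_schedule": localized, "stats": network_stats}

def local_optimization(schedule):
    # each site picks access paths using centralized optimization techniques
    return {"local_plans": schedule}

plan = local_optimization(
    global_optimization(
        data_localization(
            query_decomposition("SELECT name FROM STUDENT WHERE year = 'senior'"),
            fragment_schema={"STUDENT": ["STUDENT_1@site1", "STUDENT_2@site2"]}),
        network_stats={"bandwidth_mbps": 100}))
print(plan)
```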
Distributed Query Optimization:
Distributed query optimization is defined as finding an efficient execution strategy for a query whose data is spread over a distributed network.
Query optimization is considerably more difficult in a distributed environment.
There are three components of distributed query optimization: access methods, join criteria, and transmission costs.
• Access method: the methods used to access data in the distributed environment, such as hashing and indexing.
• Join criteria: in a distributed database, data resides at different sites. The join criteria determine how data from those sites is joined to obtain an optimized result.
• Transmission Costs: If data from multiple sites must be joined to satisfy a single query, then the cost of transmitting the results from intermediate steps needs to be factored into the equation. At times, it may be more cost effective simply to ship entire tables across the network to enable processing to occur at a single site, thereby reducing overall transmission costs. This component of query optimization is an issue only in a distributed environment.
There are many distributed query optimization issues; among them are the type of optimizer, optimization granularity, network topology and optimization timing.
AN OPTIMIZATION EXAMPLE
In order to understand distributed query optimization more fully, let's take a look at an example of a query accessing tables in multiple locations. Consider the ramifications of coding a program to simply retrieve a list of all teachers who have taught physics to seniors. Furthermore, assume that the COURSE table and the ENROLLMENT table exist at Site 1, while the STUDENT table exists at Site 2. If all of the tables existed at a single site, or if the DBMS supported distributed multi-site requests, the DBMS could optimize the query itself. However, if the DBMS cannot perform (or optimize) distributed multi-site requests, programmatic optimization must be performed. There are at least six different ways to go about optimizing this three-table join.
Option 1: Start with Site 1 and join COURSE and ENROLLMENT, selecting only physics courses. For each qualifying row, move it to Site 2 to be joined with STUDENT to see if any are seniors.
Option 2: Start with Site 1 and join COURSE and ENROLLMENT, selecting only physics courses, and move the entire result set to Site 2 to be joined with STUDENT, checking for senior students only.
Option 3: Start with Site 2 and select only seniors from STUDENT. For each of these examine the join of COURSE and ENROLLMENT at Site 1 for physics classes.
Option 4: Start with Site 2 and select only seniors from STUDENT at Site 2, and move the entire result set to Site 1 to be joined with COURSE and ENROLLMENT, checking for physics classes only.
Option 5: Move the COURSE and ENROLLMENT tables to Site 2 and proceed with a local three-table join.
Option 6: Move the STUDENT to Site 1 and proceed with a local three-table join.
Which of these six options will perform the best? Unfortunately, the only correct answer is "It depends." The optimal choice will depend upon:
1. the size of the tables;
2. the size of the result sets, that is, the number of qualifying rows and their length in bytes; and
3. the efficiency of the network.
A rough cost comparison under assumed sizes is sketched below.
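To make the trade-offs concrete, here is a rough, illustrative comparison of how many bytes each option would ship across the network. Every number (row counts, row widths, selectivities, per-message overhead) is an invented assumption; a real optimizer would use catalog statistics instead.

```python
# Rough, illustrative network-transfer comparison of the six options above.
# All figures are invented assumptions, not measurements.

ROWS = {"COURSE": 1_000, "ENROLLMENT": 100_000, "STUDENT": 20_000}
ROW_BYTES = {"COURSE": 200, "ENROLLMENT": 50, "STUDENT": 300}

PHYSICS_FRACTION = 0.02        # share of enrollments that are physics
SENIOR_FRACTION = 0.25         # share of students who are seniors
JOINED_ROW_BYTES = 100         # width of an intermediate (selected/joined) row
PER_MESSAGE_OVERHEAD = 100     # extra bytes when rows are sent one at a time

def table_bytes(name):
    return ROWS[name] * ROW_BYTES[name]

physics_rows = int(ROWS["ENROLLMENT"] * PHYSICS_FRACTION)
senior_rows = int(ROWS["STUDENT"] * SENIOR_FRACTION)

transfer_bytes = {
    "Option 1 (row-at-a-time to Site 2)": physics_rows * (JOINED_ROW_BYTES + PER_MESSAGE_OVERHEAD),
    "Option 2 (physics result set to Site 2)": physics_rows * JOINED_ROW_BYTES,
    "Option 3 (probe Site 1 per senior)": senior_rows * (JOINED_ROW_BYTES + PER_MESSAGE_OVERHEAD),
    "Option 4 (senior result set to Site 1)": senior_rows * JOINED_ROW_BYTES,
    "Option 5 (ship COURSE + ENROLLMENT)": table_bytes("COURSE") + table_bytes("ENROLLMENT"),
    "Option 6 (ship STUDENT)": table_bytes("STUDENT"),
}

for option, cost in sorted(transfer_bytes.items(), key=lambda kv: kv[1]):
    print(f"{option}: ~{cost:,} bytes shipped")
```

Under these particular assumptions the options that ship a small, pre-filtered result set win, but changing the table sizes or selectivities can reorder them, which is exactly why the honest answer is "it depends".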
New query optimization techniques in distributed database:
• Cost – based query optimization:
The objective of cost-based query optimization is to estimate the cost of different equivalent query expressions and choose the execution plan with the lowest cost.
Cost-based query optimization mainly depends on two factors: the solution space and the cost function.
Solution space: the set of equivalent algebraic expressions for the query.
Cost function: the total cost is the sum of the I/O cost, the CPU cost and the communication cost; its weights also depend on the particular distributed environment.
By considering these factors, cost-based query optimization selects a plan for the distributed environment; a small sketch of such a cost function follows.
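A minimal sketch of a cost function of this shape (I/O + CPU + communication) is shown below. The unit weights and the plan figures are invented assumptions, not real measurements.

```python
# Minimal sketch of a cost function: total = I/O cost + CPU cost + communication cost.
# All unit weights below are invented assumptions.

IO_COST_PER_PAGE = 1.0        # cost units per disk page read or written
CPU_COST_PER_TUPLE = 0.01     # cost units per tuple processed
COMM_COST_PER_BYTE = 0.0001   # cost units per byte shipped between sites
COMM_COST_PER_MESSAGE = 5.0   # fixed cost per network message

def plan_cost(pages, tuples, bytes_shipped, messages):
    return (IO_COST_PER_PAGE * pages
            + CPU_COST_PER_TUPLE * tuples
            + COMM_COST_PER_BYTE * bytes_shipped
            + COMM_COST_PER_MESSAGE * messages)

# Two equivalent plans for the same query: the optimizer keeps the cheaper one.
plan_a = plan_cost(pages=500, tuples=100_000, bytes_shipped=2_000_000, messages=4)
plan_b = plan_cost(pages=800, tuples=120_000, bytes_shipped=200_000, messages=10)
best = min(("plan_a", plan_a), ("plan_b", plan_b), key=lambda p: p[1])
print(best)
```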
• Heuristic – based query optimization :
The heuristic-based query optimization process involves the following steps (a small example of the first rule follows the list):
Perform selection operations as early as possible.
Combine a Cartesian product with a subsequent selection whose predicate represents a join condition into a single join operation.
Use the associativity of binary operations to rearrange leaf nodes so that the leaf nodes with the most restrictive selection operations are executed first.
Perform projection operations as early as possible.
Eliminate duplicate computations.
Heuristic optimization is mainly used to minimize the cost of selecting sites for multi-join operations.
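The sketch below illustrates the first rule, pushing a selection below a join so that it is performed as early as possible. The node classes and the rewrite function are hypothetical, not taken from any real optimizer.

```python
# Tiny illustration of the "perform selections as early as possible" heuristic:
# a selection sitting on top of a join is moved onto the input it references.

from dataclasses import dataclass

@dataclass
class Scan:
    table: str

@dataclass
class Join:
    left: object
    right: object
    condition: str

@dataclass
class Select:
    predicate: str
    applies_to: str          # which table the predicate references
    child: object

def push_selection_down(node):
    """If a selection on top of a join only references one side, move it below
    the join onto that side; otherwise return the plan unchanged."""
    if isinstance(node, Select) and isinstance(node.child, Join):
        join = node.child
        if isinstance(join.left, Scan) and join.left.table == node.applies_to:
            return Join(Select(node.predicate, node.applies_to, join.left),
                        join.right, join.condition)
        if isinstance(join.right, Scan) and join.right.table == node.applies_to:
            return Join(join.left,
                        Select(node.predicate, node.applies_to, join.right),
                        join.condition)
    return node

before = Select("subject = 'physics'", "COURSE",
                Join(Scan("COURSE"), Scan("ENROLLMENT"),
                     "COURSE.id = ENROLLMENT.course_id"))
after = push_selection_down(before)
print(after)   # the selection now filters COURSE before the join
```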
• Rank-Aware Query Optimization :
Ranking is an important property that needs to be fully supported by current relational query engines. Recently, several rank-join query operators have been proposed based on rank aggregation algorithms. Rank-join operators progressively rank the join results while performing the join operation. The new operators have a direct impact on traditional query processing and optimization. We introduce a rank-aware query optimization framework that fully integrates rank-join operators into relational query engines. The framework is based on extending the System R dynamic programming algorithm in both enumeration and pruning. We define ranking as an interesting property that triggers the generation of rank-aware query plans. Unlike traditional join operators, optimizing for rank-join operators depends on estimating the input cardinality of these operators. We introduce a probabilistic model for estimating the input cardinality, and hence the cost of a rank-join operator. To our knowledge, this paper is the first effort in estimating the needed input size for optimal rank aggregation algorithms. Costing ranking plans, although challenging, is key to the full integration of rank-join operators in real-world query processing engines. We experimentally evaluate our framework by modifying the query optimizer of an open-source database management system.
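As a toy illustration of the idea behind rank-join operators, the sketch below joins two ranked inputs on a shared key and keeps only the k answers with the highest combined score. It is a naive, fully materializing version, not the incremental rank-join operators discussed above; all data and scores are invented.

```python
# Naive top-k "rank join": join two scored inputs and keep the k best answers.

import heapq

# (block_id, score) pairs: e.g. city blocks ranked by hotel quality and by
# restaurant quality; these values are invented.
hotel_scores      = [("b1", 0.90), ("b2", 0.70), ("b3", 0.40), ("b4", 0.85)]
restaurant_scores = [("b1", 0.80), ("b2", 0.30), ("b3", 0.95), ("b5", 0.60)]

def naive_rank_join(left, right, k):
    right_lookup = dict(right)
    joined = [(key, lscore + right_lookup[key])     # monotone score aggregation
              for key, lscore in left if key in right_lookup]
    return heapq.nlargest(k, joined, key=lambda kv: kv[1])

# Prints the two blocks with the highest combined score ("b1" and "b3" here).
# A real rank-join operator would produce them incrementally, without
# materializing the full join first.
print(naive_rank_join(hotel_scores, restaurant_scores, k=2))
```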
Query Optimization problems and some solutions in distributed database:
• Stochastic query optimization problem for multiple join:
The model of three joins stored at two sites leads to a nonlinear programming problem, which has an analytical solution. The model with four sites leads to a special kind of nonlinear optimization problem (P). This problem is known as the stochastic query optimization problem for multiple joins, and it cannot be solved analytically. It has been proved that problem (P) has at least one solution, and two new methods have been presented for solving it: an ad hoc constructive model and a new evolutionary technique. Results obtained by the two optimization approaches are compared.
• Problem of optimizing queries that involve set operations:
This problem concerns optimizing queries that involve set operations (set queries) in a distributed relational database system, with particular emphasis on the optimization of such queries in horizontally partitioned database systems. A mathematical programming model of the set query problem has been developed and its NP-completeness proved. Solution procedures have been proposed and computational results presented. One of the main findings of the computational experiments is that, for many queries, the solution procedures are not sensitive to errors in estimating the size of the results of set operations.
• Stochastic optimization problem for multiple queries:
Many algorithms have been devised for minimizing the costs associated with obtaining the answer to a single, isolated query in a distributed database system. However, if more than one query may be processed by the system at the same time and if the arrival times of the queries are unknown, the determination of optimal query-processing strategies becomes a stochastic optimization problem. In order to cope with such problems, a theoretical state-transition model is presented that treats the system as one operating under a stochastic load. Query-processing strategies may then be distributed over the processors of a network as probability distributions, in a manner which accommodates many queries over time. It is then shown that the model leads to the determination of optimal query-processing strategies as the solution of mathematical programming problems, and analytical results for several examples are presented. Furthermore, a divide-and-conquer approach is introduced for decomposing stochastic query optimization problems into distinct sub problems for processing queries sequentially and in parallel.
• Sum product optimization problem:
Most distributed query optimization problems can be transformed into an optimization problem comprising a set of binary decisions, termed the Sum Product Optimization (SPO) problem. SPO is first proved NP-hard in light of the NP-completeness of the well-known Knapsack (KNAP) problem. Using this result as a basis, five classes of distributed query optimization problems, which cover the majority of distributed query optimization problems previously studied in the literature, are then proved NP-hard by polynomially reducing SPO to each of them. The details of each problem transformation are derived. This not only proves the conjecture that many prior studies relied upon, but also provides a framework for future related studies.
Advantages of distributed database:
• Reflects organizational structure — database fragments are located in the departments they relate to.
• Local autonomy — a department can control the data about them (as they are the ones familiar with it.)
• Improved availability — a fault in one database system will only affect one fragment, instead of the entire database.
• Improved performance — data is located near the site of greatest demand, and the database systems themselves are parallelized, allowing load on the databases to be balanced among servers. (A high load on one module of the database won't affect other modules of the database in a distributed database.)
• Economics — it costs less to create a network of smaller computers with the power of a single large computer.
• Modularity — systems can be modified, added and removed from the distributed database without affecting other modules (systems).
Advantages of Distributed query optimization:
Distributed query optimization techniques provide correct results efficiently in a distributed environment.
These techniques deliver efficient performance across different distributed networks.
On the Internet, these techniques help to locate exactly the information required and extract it efficiently.
Conclusion:
Most real-world data is not well structured. Today's databases typically contain much unstructured data such as text, images, video, and audio, often distributed across computer networks. Processing these kinds of data and optimizing queries over them requires the distributed query optimization techniques described above.
PALLADIUM: A REVOLUTIONARY BREAKTHROUGH
ABSTRACT
“Hackers” is a commonly heard term nowadays.
Goal | Threat
1. Data confidentiality | Exposure of data
2. Data integrity | Tampering with data
3. System availability | Denial of service
As we tend towards a more and more computer-centric world, the concept of data security has attained paramount importance. Though present-day security systems offer a good level of protection, they are incapable of providing a “trustworthy” environment and are vulnerable to unexpected attacks. Palladium is a content protection concept that has spawned from the belief that the PC, as it currently stands, is not architecturally equipped to protect a user from the pitfalls and challenges that an all-pervasive network such as the Internet poses. As a drastic change in PC hardware is not feasible, largely for economic reasons, palladium hopes to introduce only a minimal change on this front. A paradigm shift is awaited in this scenario with the advent of palladium, which makes content protection a shared concern of both software and hardware. In the course of this paper the revolutionary aspects of palladium are discussed in detail.
A case study on restructuring the present data security system of the JNTU examination system using palladium is put forward.
INTRODUCTION
Need for security:
Many organizations possess valuable information that they guard closely. As more of this information is stored in computers, the need for data security becomes increasingly important. Protecting this information against unauthorized usage is therefore a major concern for operating systems and users alike.
Threats of data:
For a specific perspective computer systems have 3 general goals with corresponding threats to them as listed below.
The first one, data confidentiality is concerned with secret data remaining secret. More specifically if the owner of some data has decided that the data should be available only to certain people and no others, then the system should guarantee that release of data to unauthorized people does not occur. Another aspect of this is individual privacy.
The second goal, data integrity, means that unauthorized users should not be able to modify any data without the owner’s permission. Data modification in this context includes not only changing the data, but also removing data and adding false data as well. Thus it is very important that a system should guarantee that data deposited in it remains unchanged until the owner decides to do so.
The third goal, system availability, means that nobody can disturb the system to make it unstable. The system must be able to ensure that authorized persons have access to the data and do not suffer from denial of service. The classic example of such a threat is excessive pinging of a web site in order to slow it down.
Types of data threats:-
VIRUS:-
Basically, a virus is a piece of code that replicates itself and usually does some damage. In a sense the writer of a virus is an intruder, often with high technical skills. In the same breath it must be said that a virus need not always be intentional and can simply be code with disastrous run-time errors. The difference between a conventional intruder and a virus is that the former refers to a person who is personally trying to break into a system to cause damage, whereas the latter is a program written by such a person and then released into the world in the hope that it causes damage.
The most common types of viruses are: executable program viruses, memory resident viruses, boot sector viruses, device driver viruses, macro viruses, source code viruses, Trojan horses etc.
INTRUDERS:-
In the security literature, people who are nosing around places they should not be are called intruders or sometimes adversaries. Intruders can be broadly divided into passive and active. Passive intruders just want to read files they are not authorized to read. Active intruders are more malicious and intend to make unauthorized changes to data. Some of the common activities indulged in by intruders are:
Casual prying:
Non-technical users who wish to read other people’s e-mail and private files mostly do this.
Snooping:
It refers to breaking the security of a shared computer system or a server. Snooping is generally done as a challenge and is not aimed at stealing or tampering with confidential data.
Commercial Espionage:
This refers to the determined attempts to make money using secret data. For example an employee in an organization can secure sensitive data and sell it away to rival companies for monetary gains.
It is very important that potential intruders and their corresponding activities are taken into consideration before devising a security system. This is essential as the level of threat and intended damage differs from one to another.
AN OVERVIEW OF SOME OF THE PRESENT DAY DATA SECURITY SYSTEMS
Cryptography
Cryptography is the method in which a message or file, called plain text, is taken and encrypted into cipher text in such a way that only authorized people know how to convert it back to plain text. This is done commonly in four ways:
Secret-key cryptography, public-key cryptography, one-way function cryptography and digital signatures. Unless the encryption technique used is very strong, it is possible, with some effort, for crackers to decrypt files.
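As a small illustration of secret-key cryptography (one of the four approaches listed above), the sketch below encrypts and decrypts a message with a single shared key. It assumes the third-party Python "cryptography" package is installed; the message text is invented.

```python
# Illustrative secret-key cryptography using the third-party "cryptography"
# package (pip install cryptography). Anyone holding the key can decrypt,
# which is exactly the property secret-key schemes provide.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the shared secret key
cipher = Fernet(key)

plain_text = b"Examination results: confidential"
cipher_text = cipher.encrypt(plain_text)      # unreadable without the key
recovered = cipher.decrypt(cipher_text)

assert recovered == plain_text
print(cipher_text)
```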
User authentication
It is a method employed by the operating system or a program of a computer to determine the identity of a user. Types of user authentication are:
Authentication using passwords, authentication using physical objects (like smart cards, ATM cards etc.), authentication using biometrics (like fingerprints, retinal pattern scan, signature analysis, voice recognition etc.). Inherent problems of user authentication are password cracking, duplication of physical objects and simulation of biometrics by artificial objects.
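To illustrate password-based authentication and why password cracking is its inherent weakness, here is a minimal sketch using only the Python standard library: passwords are stored as salted hashes and verified with a constant-time comparison. The example passwords are invented.

```python
# Minimal sketch of password authentication with salted hashing. Storing
# salted hashes instead of plain passwords makes cracking much harder.

import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                       # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)   # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess123", salt, stored))                      # False
```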
Anti-virus software
Anti-virus software scans every executable file on a computer's disk looking for viruses known to its database. It then repairs, quarantines or deletes any infected files. However, a clever virus can infect the anti-virus software itself. Some of the popular anti-virus products are Norton, PC-cillin and McAfee.
Firewalls
It is a method of preventing unauthorized access to a computer system, often found in networked computers. A firewall is designed to provide normal service to authorized users while at the same time preventing unauthorized users from gaining access to the system. In reality, firewalls add a level of inconvenience to legitimate users, and their ability to control illegal access may be questionable.
Palladium-“a revolutionary break through in data security”
Palladium is the code name for a revolutionary set of “features” for the Windows operating system. The code name of this initiative, “palladium”, is a moniker drawn from the Greek mythological goddess of wisdom and protector of civilized life.
Till now, most forms of data security have been software-oriented, with little or no hardware involvement. Palladium can be touted as the first technology to develop software-hardware synchronization for better data security. Hardware changes incorporated by palladium are reflected in key components such as the CPU, a motherboard chip (the cryptographic co-processor), and input and output components such as the graphics processor.
When combined with a new breed of hardware and applications, these “features” will give individuals and groups of users greater data security, personal privacy, and system integrity. In addition, palladium will offer enterprise consumers significant new benefits for network security and content protection.
Core principles of the palladium initiative:
Palladium is not a separate operating system. It is based on architectural enhancements to the Windows kernel and to computer hardware, including the CPU, peripherals and chipsets, to create a new trusted execution subsystem.
Palladium will not eliminate any features of Windows that users have come to rely on; everything that runs today will continue to run with palladium.
It is important to note that while today's applications and devices will continue to work in “palladium”, they will gain little to no benefit from the “palladium” environment unless new palladium-aware applications are written for it.
In addition, palladium does not change what can be programmed or run on the computing platform. Palladium will operate with any program the user specifies while maintaining security.
Aspects of palladium
Palladium comprises two key components: hardware and software.
Hardware components:-
Engineered for ensuring the protected execution of applications and processes, the protected operating environment provides the following basic mechanisms:
Trusted space (or curtained memory):
This is an execution space that is protected from external software attacks such as viruses. Trusted space is set up and maintained by the nexus and has access to various services provided by palladium, such as sealed storage. In other words, it is protected RAM.
Sealed storage:
Sealed storage is an authenticated mechanism that allows a program to store secrets that cannot be retrieved by non-trusted programs such as a virus or Trojan horse. Information in sealed storage can’t be read by other non-trusted programs (sealed storage cannot be read by unauthorized secure programs, for that matter, and cannot be read even if another operating system is booted or the disk is carried to another machine.). These stored secrets can be tied to the machine, the nexus or the application. Palladium will also provide mechanisms for the safe and controlled backup and migration of secrets to other machines. In other words it is a secured and encrypted part of the hard disk.
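As a purely conceptual illustration of sealing (not the actual Palladium mechanism), the toy sketch below "seals" data by encrypting and integrity-protecting it with a key derived from a machine-specific secret and an application id, so the sealed blob is useless on another machine. The machine secret, the application id and the XOR-based "cipher" are all invented placeholders; real sealed storage relies on the security chip and strong ciphers.

```python
# Toy model of sealed storage: data is tied to a machine-specific secret and an
# application id. This is a conceptual illustration, not a secure implementation.

import hashlib, hmac

MACHINE_SECRET = b"burned-into-this-machines-security-chip"   # placeholder

def derive_seal_key(app_id):
    # tie the key to both the machine and the requesting application
    return hashlib.sha256(MACHINE_SECRET + app_id.encode()).digest()

def seal(data, app_id):
    key = derive_seal_key(app_id)
    # toy "encryption": XOR with a keystream; real sealing uses strong ciphers
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    blob = bytes(b ^ s for b, s in zip(data, stream))
    tag = hmac.new(key, blob, hashlib.sha256).digest()        # integrity check
    return tag + blob

def unseal(sealed, app_id):
    key = derive_seal_key(app_id)
    tag, blob = sealed[:32], sealed[32:]
    if not hmac.compare_digest(tag, hmac.new(key, blob, hashlib.sha256).digest()):
        raise ValueError("sealed data was tampered with or sealed elsewhere")
    stream = hashlib.sha256(key).digest() * (len(blob) // 32 + 1)
    return bytes(b ^ s for b, s in zip(blob, stream))

secret = seal(b"exam answer key", app_id="jntu-exam-agent")
print(unseal(secret, app_id="jntu-exam-agent"))
```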
Secure input and output:
A secure path from the keyboard and mouse to palladium applications and a secure path from palladium applications to the screen ensure input-output security.
Attestation:
Attestation is a mechanism that allows the user to reveal selected characteristics of the operating environment to external requestors. In reality it takes the form of an encryption co-processor. It is entrusted with the job of encryption and decryption of data “to and from” the “sealed storage”.
These basic mechanisms provide a platform for building distributed trusted software.
Software components:-
The following are the software components of palladium:
Nexus(a technology formerly referred to as the “trusted operating root(TOR)”):
This component manages trust functionality for palladium user-mode processes (agents). The nexus executes in kernel mode in the trusted space. It provides basic services to trusted agents, such as the establishment of the process mechanisms for communicating with trusted agents and other applications, and special trust services such as attestation of requests of requests and the sealing and unsealing of secrets.
Trusted agents:
A trusted agent is a program, a part of a program or a service that runs in user mode in the trusted space. A trusted agent calls the nexus for security-related services and critical general services such as memory management. A trusted agent is able to store secrets using sealed storage and authenticates itself using the attestation services of the nexus. One of the main principles of trusted agents is that they can be trusted or not trusted by multiple entities, such as the user, an IT department, a merchant or a vendor. Each trusted agent or entity controls its own sphere of trust and they need not trust or rely on each other.
Together, the nexus and trusted agents provide the following features:
Trusted data storage, encryption services for applications to ensure data integrity and protection.
Authenticated boot, facilities to enable hardware and software and software to authenticate itself.
WORKING OF PALLADIUM
Palladium is a new hardware and software architecture. This architecture will include a new security computing chip and design changes to a computer’s central processing unit (CPU), chipsets, and peripheral devices, such as keyboards and printers. It also will enable applications and components of these applications to run in a protected memory space that is highly resistant to tempering and interference.
The PC-specific secret coding within palladium makes stolen files useless on other machines as they are physically and cryptographically locked within the hardware of the machine. This means software attack can’t expose these secrets. Even if a sophisticated hardware attack were to get at them, these core system secrets would only be applicable to the data within a single computer and could not be used on other computes.
PROTECTION USING PALLADIUM
Palladium prevents identity theft and unauthorized access to personal data on the user’s device while on the Internet and on other networks. Transactions and processes are verifiable and reliable through the attestable hardware and software architecture and they cannot be imitated.
With palladium, a system’s secrets are locked in the computer and are only revealed on terms that the user has specified. In addition, the trusted user interface prevents snooping and impersonation. The user controls what is revealed and can separate categories of data on a single computer into distinct realms. Like a set of vaults, realms provide the assurance of seperability. With distinct identifiers, policies and categories of data for each, realms allow a user to have a locked-down work environment and fully open surfing environment at the same time, on the same computer.
Finally, the “palladium” architecture will enable a new class of identity service providers that can potentially offer users choices for how their identities are represented in online transactions. These service providers can also ensure that the user is in control of policies for how personal information is revealed to others. In addition, palladium will allow users to employ identity service providers of their own choice.
From the perspective of privacy (and anti-virus protection), one of the key benefits of palladium is the ability for users to effectively delegate certification of code. Anyone can certify ‘palladium’ hardware or software, and it is expected that many companies and organizations will offer this service. Allowing multiple parties to independently evaluate and certify “palladium” capable systems means that users will be able to obtain verification of the system’s operation from organizations that they trust. In addition, this will form the basis for a strong business incentive to preserve and enhance privacy and security. Moreover, palladium allows any number of trusted internal or external entities to interact with a trusted component or trusted platform.
SHORTCOMINGS AND PIT FALLS OF PALLADIUM
Though palladium can provide a higher degree of much needed data security it is with its share of problems like:
Software and applications have to be rewritten to synchronize with palladium or new applications must be written.
Changes are to be made to the existing computer hardware to support palladium.
It would be a long time before this technology became commonplace.
CASE STUDY
RESTRUCTURING DATA SECURITY OF JNTU EXAMINATIONS SYSTEM USING PALLADIUM
EXISTING SYSTEM
In order to eliminate the leakage of question papers, the Jawaharlal Nehru Technological University (J.N.T.U), Hyderabad, has recently decided to implement the system of electronic distribution of examination papers (EDEP) – a new method of conduct of examinations.
In this system 4 sets of question papers are generated and encrypted into a “college-specific” C.D.
The encrypted CD is supplied to the examination centers about 3 days in advance.
The question papers in encrypted form are also made available on the JNTU examination website.
Password to read the CDs is supplied one hour before the commencement of examination to the principal/chief superintendent through Internet, cell phone, telephone or fax.
The principal soon after receipt of password decrypts the original question papers of that day using the software supplied by JNTU examination branch.
The EDEP employs the method of public key cryptography.
Though this system is largely stable and secure it has certain loopholes like:
As the encrypted question papers are also available on the internet there is every chance of crackers downloading and trying to decrypt them.
The student and teacher community alike has resented this method of 4 sets of question papers.
There is every chance of failure or miss-match of the college specific C.D., due to the large number of affiliate colleges (as is been observed in some cases).
Also, in one case, a previous examination C.D. was mistakenly decrypted, and the question papers thus printed, distributed initially at an examination center.
Palladium-as a solution
Palladium is based on the concept of trusted space. A closed sphere of trust binds data or a service, to both a set of users and to a set of acceptable applications. Due to this an unauthorized user cannot access the data or software which is based on a server.
In the revised system the encrypted question papers are put up on the J.N.T.U’s palladium based server and all the affiliate colleges use college-specific palladium computers. It works as follows:
A third party trusted agent (government or private programmed) is employed who is responsible for granting of access to JNTU examination server. It processes the requests and forwards only those certified by the “nexus” of the JNTU’s palladium based server.
If an unauthorized system (without palladium) forwards a request it is immediately rejected by the server’s trusted agent. Even if an unauthorized palladium PC tries to access the server its request is rejected.
The PC-specific secret coding with in palladium makes stolen files useless on other machines as they are physically and cryptographically locked with in the hardware of the server at trusted computer.
During examinations the palladium computer of the college issues a request to the common trusted agent (of JNTU and college) via Internet. This request is granted and the college accesses each-particular question paper pertaining to that day.
ADVANTAGES
As the process of question paper down load is highly secure, the chances of leakage are literally nil.
Since this method is highly trustworthy a single set question paper system can be employed.
An advanced system of Internet communication can be adopted for a broader reach, thus eliminating the role of C.D.
Since the download of question papers is “request-specific and bound” there can not be a case of question paper mis-match.
CONCLUSION
Today, IT managers face tremendous challenges due to the inherent openness of end-user machines, and millions of people simply avoid some online transactions out of fear. However, with the usage of “palladium” systems, trustworthy, secure interactions will become possible. This technology will provide tougher security defenses and more abundant privacy benefits than ever before. With palladium, users will have unparalleled power over system integrity, personal privacy and data security.
Thus it wouldn’t be exaggeration to say that palladium is all to secure the computing world in ways unimaginable.
REFERENCES
Modern operating systems by Andrew.S.Tanenbaum.
Microsoft press pass.
J.N.T.U website.
“HACKERS” ,now -a -days commonly spelled term”.
Goal                        Threat
1. Data confidentiality     Exposure of data
2. Data integrity           Tampering with data
3. System availability      Denial of service
As we move towards a more and more computer-centric world, the concept of data security has attained paramount importance. Though present-day security systems offer a good level of protection, they are incapable of providing a "trustworthy" environment and are vulnerable to unexpected attacks. Palladium is a content protection concept that has spawned from the belief that the PC, as it currently stands, is not architecturally equipped to protect a user from the pitfalls and challenges that an all-pervasive network such as the Internet poses. As a drastic change in PC hardware is not feasible, largely due to economic reasons, palladium hopes to introduce only a minimal change on this front. A paradigm shift is awaited with the advent of palladium, making content protection a shared concern of both software and hardware. In the course of this paper the revolutionary aspects of palladium are discussed in detail.
A case study to restructure the present data security system of the JNTU examinations system using palladium is also put forward.
INTRODUCTION
Need for security:
Many organizations possess valuable information that they guard closely. As more of this information is stored in computers, the need for data security becomes increasingly important. Protecting this information against unauthorized usage is therefore a major concern for operating systems and users alike.
Threats to data:
From a security perspective, computer systems have three general goals, with corresponding threats to them, as listed below.
The first one, data confidentiality, is concerned with secret data remaining secret. More specifically, if the owner of some data has decided that the data should be available only to certain people and no others, then the system should guarantee that release of the data to unauthorized people does not occur. Another aspect of this is individual privacy.
The second goal, data integrity, means that unauthorized users should not be able to modify any data without the owner's permission. Data modification in this context includes not only changing the data, but also removing data and adding false data. Thus it is very important that a system guarantees that data deposited in it remains unchanged until the owner decides to change it.
The third goal, system availability, means that nobody can disturb the system to make it unstable. The system must be able to ensure that authorized persons have access to the data and do not suffer from denial of service. The classical example of a threat to this is excessive 'PING'ing of a web site in order to slow it down.
Types of data threats:-
VIRUS:-
Basically, a virus is a piece of code that replicates itself and usually does some damage. In a sense, the writer of a virus is also an intruder, often with high technical skills. At the same time, it must be said that a virus need not always be intentional and can simply be code with disastrous run-time errors. The difference between a conventional intruder and a virus is that the former refers to a person who is personally trying to break into a system to cause damage, whereas the latter is a program written by such a person and then released into the world in the hope that it causes damage.
The most common types of viruses are: executable program viruses, memory resident viruses, boot sector viruses, device driver viruses, macro viruses, source code viruses, Trojan horses etc.
INTRUDERS:-
In security literature, people who go nosing around places they should not be are called intruders or sometimes adversaries. Intruders can be broadly divided into passive and active. Passive intruders just want to read files they are not authorized to read. Active intruders are more malicious and intend to make unauthorized changes to data. Some of the common activities indulged in by intruders are:
Casual prying:
Non-technical users who wish to read other people’s e-mail and private files mostly do this.
Snooping:
It refers to breaking the security of a shared computer system or a server. Snooping is generally done as a challenge and is not aimed at stealing or tampering with confidential data.
Commercial Espionage:
This refers to determined attempts to make money using secret data. For example, an employee in an organization can obtain sensitive data and sell it to rival companies for monetary gain.
It is very important that potential intruders and their corresponding activities are taken into consideration before devising a security system. This is essential as the level of threat and intended damage differs from one to another.
AN OVERVIEW OF SOME OF THE PRESENT DAY DATA SECURITY SYSTEMS
Cryptography
Cryptography is the method in which a message or file, called the plaintext, is encrypted into ciphertext in such a way that only authorized people know how to convert it back to plaintext. This is commonly done in four ways:
Secret key cryptography, public key cryptography, one-way (hash) function cryptography and digital signatures. Unless the encryption technique used is very strong it is possible, with some effort, for crackers to decrypt files.
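To make the public key idea concrete, the following Python sketch implements textbook RSA with deliberately tiny primes. It is purely illustrative and not secure; real systems use very large keys, padding schemes and vetted libraries. The point is only that the encryption key (e, n) can be published while decryption requires the private exponent d.

# Toy illustration of public key cryptography (textbook RSA with tiny primes).
# NOT secure -- real systems use large keys, padding and vetted libraries.

p, q = 61, 53                 # two small primes (kept secret)
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent (coprime with phi)
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

def encrypt(m: int) -> int:
    """Anyone holding the public key (e, n) can encrypt."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private key (d, n) can decrypt."""
    return pow(c, d, n)

plain = 42                    # the message must be an integer smaller than n
cipher = encrypt(plain)
assert decrypt(cipher) == plain
print(f"plaintext={plain}, ciphertext={cipher}, decrypted={decrypt(cipher)}")

A digital signature reverses the roles: the owner applies the private exponent to a hash of the message, and anyone can verify the result with the public key.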
User authentication
It is a method employed by the operating system or a program of a computer to determine the identity of a user. Types of user authentication are:
Authentication using passwords, authentication using physical objects (like smart cards, ATM cards etc.), authentication using biometrics (like fingerprints, retinal pattern scan, signature analysis, voice recognition etc.). Inherent problems of user authentication are password cracking, duplication of physical objects and simulation of biometrics by artificial objects.
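As a sketch of how password authentication is usually hardened against the cracking problem mentioned above, the Python snippet below stores only a random salt and an iterated hash of the password rather than the password itself. The iteration count and hash choice here are illustrative assumptions, not a recommendation.

# Sketch of password-based authentication with salted, iterated hashing.
# Storing only salt + hash means a stolen password file does not reveal
# the passwords directly; iteration slows down brute-force cracking.
import hashlib, hmac, os

def register(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) to store instead of the plain password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(digest, stored)

salt, stored = register("s3cret-pass")
print(verify("s3cret-pass", salt, stored))   # True
print(verify("guess", salt, stored))         # False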
Anti-virus software
Anti-virus software scans every executable file on a computer's disk looking for viruses known to its database. It then repairs, quarantines or deletes infected files. However, a clever virus can infect the anti-virus software itself. Some of the popular anti-virus products are Norton, PC-cillin, McAfee etc.
Firewalls
A firewall is a method of preventing unauthorized access to a computer system, often found in networked computers. A firewall is designed to provide normal service to authorized users while at the same time preventing unauthorized users from gaining access to the system. In reality, firewalls add a level of inconvenience to legitimate users, and their ability to control illegal access may be questionable.
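The core mechanism of most firewalls is packet filtering: an ordered list of rules is checked for each packet and the first matching rule decides whether it passes. The Python sketch below is a toy illustration of that idea; the rule fields and addresses are invented for the example, and real firewalls match on many more attributes and track connection state.

# Minimal sketch of packet filtering: ordered rules, first match wins,
# and a default-deny policy for anything not explicitly allowed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                  # "allow" or "deny"
    src: str                     # source address prefix, "*" matches anything
    dst_port: Optional[int]      # destination port, None matches any port

RULES = [
    Rule("allow", "10.0.", 443),   # internal clients may use HTTPS
    Rule("allow", "10.0.", 22),    # internal clients may use SSH
    Rule("deny",  "*",     None),  # default: drop everything else
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        src_ok = rule.src == "*" or src_ip.startswith(rule.src)
        port_ok = rule.dst_port is None or rule.dst_port == dst_port
        if src_ok and port_ok:
            return rule.action
    return "deny"                  # unreachable with the catch-all rule, kept for safety

print(filter_packet("10.0.3.7", 443))     # allow
print(filter_packet("203.0.113.5", 22))   # deny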
Palladium - "a revolutionary breakthrough in data security"
Palladium is the code name for a revolutionary set of "features" for the Windows operating system. The code name of this initiative, "palladium", is a moniker drawn from the Greek mythological goddess of wisdom and protector of civilized life.
Till now, most forms of data security have been software-oriented with little or no hardware involvement. Palladium can be touted as the first technology to develop software-hardware synchronization for better data security. Hardware changes incorporated by palladium are reflected in key components such as the CPU, a motherboard chip (a cryptographic co-processor), and input and output components such as the graphics processor.
When combined with a new breed of hardware and applications, these "features" will give individuals and groups of users greater data security, personal privacy and system integrity. In addition, palladium will offer enterprise customers significant new benefits for network security and content protection.
Core principles of the palladium initiative:
Palladium is not a separate operating system. It is based on architectural enhancements to the Windows kernel and to computer hardware, including the CPU, peripherals and chipsets, to create a new trusted execution subsystem.
Palladium will not eliminate any features of windows that users have come to rely on; everything that runs today will continue to run with palladium.
It is important to note that while today's applications and devices will continue to work in "palladium", they will gain little to no benefit from the "palladium" environment unless they are rewritten to take advantage of it, or new applications are written.
In addition, palladium does not change what can be programmed or run on the computing platform. Palladium will operate with any program the user specifies while maintaining security.
Aspects of palladium
Palladium comprises two key components: hardware and software.
Hardware components:-
Engineered for ensuring the protected execution of applications and processes, the protected operating environment provides the following basic mechanisms:
Trusted space (or curtained memory):
This is an execution space that is protected from external software attacks such as viruses. Trusted space is set up and maintained by the nexus and has access to various services provided by palladium, such as sealed storage. In other words, it is protected RAM.
Sealed storage:
Sealed storage is an authenticated mechanism that allows a program to store secrets that cannot be retrieved by non-trusted programs such as a virus or Trojan horse. Information in sealed storage cannot be read by other non-trusted programs (nor by unauthorized trusted programs, for that matter), and cannot be read even if another operating system is booted or the disk is carried to another machine. These stored secrets can be tied to the machine, the nexus or the application. Palladium will also provide mechanisms for the safe and controlled backup and migration of secrets to other machines. In other words, it is a secured and encrypted part of the hard disk.
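The binding idea behind sealed storage can be illustrated with a small conceptual toy: the secret is encrypted under a key derived from a machine-held secret plus a hash ("measurement") of the program asking for it, so a different program, or the same program on a different machine, derives a different key and cannot unseal the data. This Python sketch is only an illustration of the concept using standard hashing; it is not Palladium's actual mechanism, and the machine secret and program code shown are hypothetical.

# Conceptual toy for the binding idea behind sealed storage.  The secret is
# encrypted under a key derived from a machine secret plus a measurement of
# the requesting program, so other programs or machines cannot unseal it.
import hashlib

MACHINE_SECRET = b"hypothetical-secret-held-by-this-machines-security-chip"

def _keystream(key: bytes, length: int) -> bytes:
    """Expand a key into a pseudo-random byte stream (toy construction)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(secret: bytes, program_code: bytes) -> bytes:
    measurement = hashlib.sha256(program_code).digest()
    key = hashlib.sha256(MACHINE_SECRET + measurement).digest()
    return bytes(a ^ b for a, b in zip(secret, _keystream(key, len(secret))))

def unseal(blob: bytes, program_code: bytes) -> bytes:
    return seal(blob, program_code)        # XOR with the same keystream inverts it

trusted_app = b"def exam_client(): ..."    # stands in for the trusted agent's code
blob = seal(b"exam-paper-decryption-key", trusted_app)
print(unseal(blob, trusted_app))           # original secret
print(unseal(blob, b"def virus(): ..."))   # garbage: wrong program measurement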
Secure input and output:
A secure path from the keyboard and mouse to palladium applications and a secure path from palladium applications to the screen ensure input-output security.
Attestation:
Attestation is a mechanism that allows the user to reveal selected characteristics of the operating environment to external requestors. In reality it takes the form of an encryption co-processor. It is entrusted with the job of encryption and decryption of data “to and from” the “sealed storage”.
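Conceptually, attestation amounts to the platform reporting selected characteristics of its operating environment together with a keyed digest that an external requestor can verify. In the Python sketch below an HMAC under a hypothetical device key stands in for the real hardware-backed signature, and the reported fields are invented for illustration.

# Conceptual sketch of attestation: the platform reports selected
# environment characteristics plus a keyed digest a verifier can check.
import hashlib, hmac, json

DEVICE_KEY = b"hypothetical-key-held-by-the-security-chip"

def attest(environment: dict) -> tuple[bytes, bytes]:
    """Produce a report of the environment and a tag binding it to this device."""
    report = json.dumps(environment, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, report, hashlib.sha256).digest()
    return report, tag

def verify_attestation(report: bytes, tag: bytes) -> bool:
    """An external requestor recomputes the tag and compares in constant time."""
    expected = hmac.new(DEVICE_KEY, report, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

report, tag = attest({"nexus_version": "1.0", "trusted_agent": "exam-client"})
print(verify_attestation(report, tag))           # True
print(verify_attestation(report + b"x", tag))    # False: report was altered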
These basic mechanisms provide a platform for building distributed trusted software.
Software components:-
The following are the software components of palladium:
Nexus (a technology formerly referred to as the "trusted operating root" (TOR)):
This component manages trust functionality for palladium user-mode processes (agents). The nexus executes in kernel mode in the trusted space. It provides basic services to trusted agents, such as the establishment of the process mechanisms for communicating with trusted agents and other applications, and special trust services such as the attestation of requests and the sealing and unsealing of secrets.
Trusted agents:
A trusted agent is a program, a part of a program or a service that runs in user mode in the trusted space. A trusted agent calls the nexus for security-related services and critical general services such as memory management. A trusted agent is able to store secrets using sealed storage and authenticates itself using the attestation services of the nexus. One of the main principles of trusted agents is that they can be trusted or not trusted by multiple entities, such as the user, an IT department, a merchant or a vendor. Each trusted agent or entity controls its own sphere of trust and they need not trust or rely on each other.
Together, the nexus and trusted agents provide the following features:
Trusted data storage: encryption services for applications to ensure data integrity and protection.
Authenticated boot: facilities that enable hardware and software to authenticate themselves.
WORKING OF PALLADIUM
Palladium is a new hardware and software architecture. This architecture will include a new security computing chip and design changes to a computer's central processing unit (CPU), chipsets and peripheral devices, such as keyboards and printers. It will also enable applications and components of these applications to run in a protected memory space that is highly resistant to tampering and interference.
The PC-specific secret coding within palladium makes stolen files useless on other machines, as they are physically and cryptographically locked within the hardware of the machine. This means a software attack cannot expose these secrets. Even if a sophisticated hardware attack were to get at them, these core system secrets would only be applicable to the data within a single computer and could not be used on other computers.
PROTECTION USING PALLADIUM
Palladium prevents identity theft and unauthorized access to personal data on the user’s device while on the Internet and on other networks. Transactions and processes are verifiable and reliable through the attestable hardware and software architecture and they cannot be imitated.
With palladium, a system's secrets are locked in the computer and are only revealed on terms that the user has specified. In addition, the trusted user interface prevents snooping and impersonation. The user controls what is revealed and can separate categories of data on a single computer into distinct realms. Like a set of vaults, realms provide the assurance of separability. With distinct identifiers, policies and categories of data for each, realms allow a user to have a locked-down work environment and a fully open surfing environment at the same time, on the same computer.
Finally, the “palladium” architecture will enable a new class of identity service providers that can potentially offer users choices for how their identities are represented in online transactions. These service providers can also ensure that the user is in control of policies for how personal information is revealed to others. In addition, palladium will allow users to employ identity service providers of their own choice.
From the perspective of privacy (and anti-virus protection), one of the key benefits of palladium is the ability for users to effectively delegate certification of code. Anyone can certify ‘palladium’ hardware or software, and it is expected that many companies and organizations will offer this service. Allowing multiple parties to independently evaluate and certify “palladium” capable systems means that users will be able to obtain verification of the system’s operation from organizations that they trust. In addition, this will form the basis for a strong business incentive to preserve and enhance privacy and security. Moreover, palladium allows any number of trusted internal or external entities to interact with a trusted component or trusted platform.
SHORTCOMINGS AND PITFALLS OF PALLADIUM
Though palladium can provide a higher degree of much-needed data security, it comes with its share of problems:
Software and applications have to be rewritten to synchronize with palladium or new applications must be written.
Changes are to be made to the existing computer hardware to support palladium.
It would be a long time before this technology became commonplace.
CASE STUDY
RESTRUCTURING DATA SECURITY OF JNTU EXAMINATIONS SYSTEM USING PALLADIUM
EXISTING SYSTEM
In order to eliminate the leakage of question papers, the Jawaharlal Nehru Technological University (JNTU), Hyderabad, has recently decided to implement the system of electronic distribution of examination papers (EDEP), a new method of conducting examinations.
In this system 4 sets of question papers are generated and encrypted into a “college-specific” C.D.
The encrypted CD is supplied to the examination centers about 3 days in advance.
The question papers in encrypted form are also made available on the JNTU examination website.
The password to read the CDs is supplied one hour before the commencement of the examination to the principal/chief superintendent through the Internet, cell phone, telephone or fax.
The principal, soon after receipt of the password, decrypts the original question papers of that day using the software supplied by the JNTU examination branch.
The EDEP employs the method of public key cryptography.
Though this system is largely stable and secure, it has certain loopholes:
As the encrypted question papers are also available on the internet there is every chance of crackers downloading and trying to decrypt them.
The student and teacher communities alike have resented this method of four sets of question papers.
There is every chance of failure or mismatch of the college-specific CD, due to the large number of affiliate colleges (as has been observed in some cases).
Also, in one case, a previous examination CD was mistakenly decrypted, and the question papers thus printed were initially distributed at an examination center.
Palladium-as a solution
Palladium is based on the concept of trusted space. A closed sphere of trust binds data or a service to both a set of users and a set of acceptable applications. Due to this, an unauthorized user cannot access data or software hosted on a server.
In the revised system the encrypted question papers are put up on JNTU's palladium-based server and all the affiliate colleges use college-specific palladium computers. It works as follows:
A third-party trusted agent (a government or private program) is employed that is responsible for granting access to the JNTU examination server. It processes the requests and forwards only those certified by the "nexus" of JNTU's palladium-based server.
If an unauthorized system (without palladium) forwards a request it is immediately rejected by the server’s trusted agent. Even if an unauthorized palladium PC tries to access the server its request is rejected.
The PC-specific secret coding within palladium makes stolen files useless on other machines, as they are physically and cryptographically locked within the hardware of the server and the trusted computer.
During examinations, the palladium computer of the college issues a request to the common trusted agent (of JNTU and the college) via the Internet. This request is granted and the college accesses the particular question paper pertaining to that day.
ADVANTAGES
As the process of question paper download is highly secure, the chances of leakage are virtually nil.
Since this method is highly trustworthy, a single-set question paper system can be employed.
An advanced system of Internet communication can be adopted for a broader reach, thus eliminating the role of the CD.
Since the download of question papers is request-specific and bound, there cannot be a case of question paper mismatch.
CONCLUSION
Today, IT managers face tremendous challenges due to the inherent openness of end-user machines, and millions of people simply avoid some online transactions out of fear. However, with the usage of “palladium” systems, trustworthy, secure interactions will become possible. This technology will provide tougher security defenses and more abundant privacy benefits than ever before. With palladium, users will have unparalleled power over system integrity, personal privacy and data security.
Thus it would not be an exaggeration to say that palladium is set to secure the computing world in ways previously unimaginable.
REFERENCES
Andrew S. Tanenbaum, Modern Operating Systems.
Microsoft PressPass.
JNTU website.
ROLE OF GRID COMPUTING IN THE INTERNET
ABSTRACT
In recent years, numerous organizations have been vying for donated resources for their grid applications. Potential resource donors are inundated with worthwhile grid projects such as discovering a cure for AIDS, finding large prime numbers, and searching for extraterrestrial intelligence. We believe that fundamental to the establishment of a grid computing framework where all (not just large organizations) are able to effectively tap into the resources available on the global network is the establishment of trust between grid application developers and resource donors. Resource donors must be able to trust that their security, safety, and privacy policies will be respected by programs that use their systems.
The purpose of this seminar is to give a basic overview of grid computing, in such a way that the reader will be able to understand the basic concepts, the principal operation and some of the issues of grid computing.
Grid computing enables the use and pooling of computer and data resources to solve complex mathematical problems. The technique is the latest development in an evolution that earlier brought forth such advances as distributed computing, the World Wide Web, and collaborative computing.
GRID COMPUTING:-
Grid computing is a form of networking. Unlike conventional networks, which focus on communication among devices, it harnesses the unused processing cycles of all computers in a network to solve problems too intensive for any stand-alone machine.
Grid computing is a method of harnessing the power of many computers in a network to solve problems requiring a large number of processing cycles and involving huge amounts of data. In grid computing, PCs, servers and workstations are linked together so that computing capacity is never wasted.
So rather than using a network of computers simply to communicate and transfer data, grid computing taps the unused processor cycles of numerous, i.e. thousands of, computers. It is distributed computing taken to the next evolutionary level. The goal of grid computing is to create the illusion of a simple yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources. Grid computing is a way to enlist a large number of machines to work on a multipart computational problem such as circuit analysis or mechanical design. It harnesses a diverse array of machines and other resources to rapidly process and solve problems beyond an organization's available capacity. Once a proper infrastructure is in place, a user will have access to a virtual computer that is reliable and adaptable to the user's needs. For this, there must be standards for grid computing that will allow a secure and robust infrastructure to be built. Standards such as the Open Grid Services Architecture (OGSA) and tools such as those provided by the Globus Toolkit provide the necessary framework. Grid computing uses an open-source protocol and software called Globus. Globus software allows computers to share data, power and software.
BASIC CONCEPT OF GRID COMPUTING
HOW IT WORKS?
Each computer is tied to a network such as the Internet, which enables regular people with home PCs to participate in a grid project from anywhere in the world. The PC owners have to download simple software from the project's host site, and the project sites use software that can divide and distribute the pieces of a program to thousands of computers for processing. Such a grid computing system is distributed among various local domains.
Working:
A grid user has to install the provided grid software on his machine, which is connected to the Internet, the most far-reaching network. The user establishes his identity with a certificate authority and has the responsibility of keeping his grid credentials secure. Once the user and/or machine are authenticated, the grid software is provided to the user to install on his machine for the purposes of using the grid as well as donating to the grid. This software may be automatically configured by the grid management system to know the communication addresses of the management nodes in the grid and the user or machine identification information. In this way, the installation may be a one-click operation. To use the grid, most grid systems require the user to log on to a system using a user ID that is enrolled in the grid. Once logged on, the user can query the grid and submit jobs. The user will usually perform some queries to check how busy the grid is, to see how his submitted jobs are progressing, and to look for resources on the grid. Grid systems usually provide command-line tools as well as graphical user interfaces (GUIs) for queries. Command-line tools are especially useful when the user wants to write a script.
Job submission usually consists of three parts, even if there is only one command required. First, some input data and possibly the executable program or execution script file are sent to the machine that will execute the job. Sending the input is called "staging the input data." Second, the job is executed on the grid machine. The grid software running on the donating machine executes the program in a process on the user's behalf. Third, the results of the job are sent back to the submitter. When there are a large number of sub-jobs, the work required to collect the results and produce the final result is usually accomplished by a single program, usually running on the machine at the point of job submission. The data accessed by the grid jobs may simply be staged in and out by the grid system. Depending on the size and number of jobs, this can add up to a large amount of data traffic. The user can query the grid system to see how his application and its sub-jobs are progressing.
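The split/execute/collect flow described above can be mimicked in miniature with a local process pool standing in for donated grid machines. The Python sketch below is not a real grid API; the job (summing squares over ranges) and all function names are invented purely to illustrate staging sub-jobs, running them in parallel and combining their results.

# Miniature version of the submit/execute/collect flow, with a local
# process pool standing in for donated grid machines.
from concurrent.futures import ProcessPoolExecutor, as_completed

def sub_job(start: int, stop: int) -> int:
    """One unit of work executed on a 'donated' machine."""
    return sum(i * i for i in range(start, stop))

def submit_job(n: int, chunks: int = 8) -> int:
    step = n // chunks
    ranges = [(i * step, (i + 1) * step if i < chunks - 1 else n) for i in range(chunks)]
    results = []
    with ProcessPoolExecutor() as pool:                    # stands in for the grid
        futures = [pool.submit(sub_job, a, b) for a, b in ranges]  # staging + dispatch
        for future in as_completed(futures):               # monitor progress
            results.append(future.result())                # collect sub-job results
    return sum(results)                                    # combine into the final result

if __name__ == "__main__":
    print(submit_job(1_000_000))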
A job may fail due to:
1. Programming error: the job stops part way with some program fault.
2. Hardware or power failure: the machine or devices being used stop working in some way.
3. Communications interruption: a communication path to the machine has failed or is overloaded with other data traffic.
4. Excessive slowness: the job might be in an infinite loop, or normal job progress may be limited by another process running at a higher priority or some other form of contention.
Grid applications can be designed to automate the monitoring and recovery of their own sub-jobs using functions provided by the grid system software application programming interfaces (APIs).
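As a sketch of such automated monitoring and recovery, the Python helper below resubmits a failed or stalled sub-job a limited number of times, using a timeout to cover hangs and lost communication. It reuses a local process pool as a stand-in for real grid APIs, and the retry and timeout values are arbitrary assumptions.

# Sketch of automated monitoring and recovery of sub-jobs: a timeout
# covers hangs or lost communication, and failed sub-jobs are
# resubmitted a limited number of times before giving up.
from concurrent.futures import ProcessPoolExecutor, TimeoutError

def run_with_retries(pool, func, args, retries=3, timeout=30.0):
    last_error = None
    for attempt in range(1, retries + 1):
        future = pool.submit(func, *args)             # (re)submit the sub-job
        try:
            return future.result(timeout=timeout)     # wait, but not forever
        except TimeoutError:
            future.cancel()                           # may not stop a running job
            last_error = f"attempt {attempt}: timed out"
        except Exception as exc:                      # programming/hardware fault
            last_error = f"attempt {attempt}: {exc}"
    raise RuntimeError(f"sub-job failed after {retries} attempts ({last_error})")

def square(x):
    return x * x

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        print(run_with_retries(pool, square, (7,), retries=2, timeout=5.0))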
Grid computing harnesses a diverse array of machines and other resources to rapidly process and solve problems beyond an organization's available capacity. Academic and government researchers have used it for several years to solve large-scale problems, and the private sector is increasingly adopting the technology to create innovative products and services, reduce time to market, and enhance business processes.
Fig. 1. A set of methods describing the connectivity of the original problem cell (OPC).
APPLICATION OF GRID COMPUTING:
Grid computing is used to solve problems which are beyond the scope of a single processor: problems involving a large amount of computation or the analysis of huge amounts of data. Right now there are scientific and technical projects, such as cancer and other medical research projects, that involve the analysis of an inordinate amount of data. Nowadays grid computing is also used by sites which host large online games; with many users on the Internet playing a large online game, there is information on the virtual organization of all the players. Grids are primarily being used today by universities and research labs for projects that require high-performance computing applications. These projects require a large amount of computer processing power or access to large amounts of data.
TYPES OF GRID:-
COMPUTATIONAL GRID:
A computational grid is focused on setting aside resources specifically for computing power. In this type of grid, most of the machines are high-performance servers.
SCAVENGING GRID:
A scavenging grid is most commonly used with large numbers of desktop machines. Machines are scavenged for available CPU cycles and other resources.
Owners of desktop machines are usually given control over when their resources are available to participate in the grid.
DATA GRID:
A data grid is responsible for housing and providing access to data across multiple organizations. Users are not concerned with where this data is located as long as they have access to it. A data grid allows users to share data, manage the data and manage security.
GLOBUS PROJECT:
The Globus project is a joint effort on the part of researchers and developers from around the world who are focused on the concept of grid computing. It is organized around four main activities:
1. Research
2. Software tools
3. Test beds
4. Applications
BENEFITS OF GRID COMPUTING
BUSINESS BENEFITS:
ACCELERATE TIME TO RESULTS:
• Can help improve productivity and collaboration.
• Can help solve problems that were previously unsolvable.
ENABLE COLLABORATION AND PROMOTE OPERATIONAL FLEXIBILITY
• Bring together not only IT resources but also people.
• Allow widely dispersed departments and businesses to create virtual organizations to share data and resources.
EFFICIENTLY SCALE TO MEET VARIABLE BUSINESS DEMANDS
• Create flexible, resilient operational infrastructures.
• Address rapid fluctuations in customer demands and needs.
• Instantaneously access compute and data resources to "sense and respond" to needs.
INCREASE PRODUCTIVITY:
• Can help give end-users uninhibited access to the computing, data and storage resources they need, when they need them.
• Can help equip employees to move through product design phases and research projects faster than ever.
• Can help improve the utilization of computing capabilities.
• Can help avoid the common pitfalls of over-provisioning and incurring excess costs.
• Can free IT organizations from the burden of administering disparate, non-integrated systems.
TECHNOLOGY BENEFITS:-
INFRASTRUCTURE OPTIMIZATION:
• Consolidate workload management.
• Reduce cycle times.
INCREASE ACCESS TO DATA AND COLLABORATION:
• Federate data and distribute it globally.
• Support large multi-disciplinary collaboration.
• Enable collaboration across organizations and among businesses.
RESILIENT, HIGHLY AVAILABLE INFRASTRUCTURE:
• Balance workloads.
• Foster business continuity.
• Enable recovery and failover.
CAPABILITY OF GRID COMPUTING:
1. EXPLOITING UNDERUTILIZED RESOURCES:
The easiest use of grid computing is to run an existing application on a different machine. Processing resources are not the only ones that may be underutilized; often, machines have enormous unused disk drive capacity. Grid computing, more specifically a data grid, can be used to aggregate this unused storage into a much larger virtual data store, possibly configured to achieve improved performance and reliability over that of any single machine. Another function of the grid is to better balance resource utilization.
2. VIRTUAL RESOURCES AND VIRTUAL ORGANIZATION FOR COLLABORATION:
Another important grid computing contribution is to enable and simplify collaboration among a wider audience. Grid computing takes these capabilities to an even wider audience, while offering important standards that enable very heterogeneous systems to work together to form the image of a large virtual computing system offering a variety of virtual resources. The users of the grid can be organized dynamically into a number of virtual organizations, each with different policy requirements. These virtual organizations can share their resources collectively as a larger grid.
3. ACCESS TO ADDITIONAL RESOURCES:
In addition to CPU and storage resources, a grid can provide access to increased quantities of other resources and to special equipment, software, licenses, and other services. The additional resources can be provided in additional numbers and/or capacity.
4. RESOURCE BALANCING:
A grid federates a large number of resources contributed by individual machines into a greater total virtual resource. For applications that are grid enabled, the grid can offer a resource balancing effect by scheduling grid jobs on machines with low utilization. This feature can prove invaluable for handling occasional peak loads of activity in parts of a larger organization. This can happen in two ways: An unexpected peak can be routed to relatively idle machines in the grid. If the grid is already fully utilized, the lowest priority work being performed on the grid can be temporarily suspended or even cancelled and performed again later to make room for the higher priority work.
Without a grid infrastructure, such balancing decisions are difficult to prioritize and execute.
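A toy scheduler in Python can make the two balancing behaviours described above concrete: route new work to the least-utilized machine, and, when every machine is full, suspend the lowest-priority running job to make room for higher-priority work. The machine list, capacities and job names below are invented for illustration; a real grid scheduler would track far richer state.

# Toy scheduler for the two balancing behaviours described above:
# new work goes to the least-utilized machine, and when every machine
# is busy the lowest-priority running job is suspended to make room.
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    capacity: int                                 # how many jobs it can run at once
    jobs: list = field(default_factory=list)      # (job_name, priority) pairs

    @property
    def utilization(self) -> float:
        return len(self.jobs) / self.capacity

def schedule(machines: list, job: str, priority: int) -> str:
    # Prefer the least-utilized machine that still has free capacity.
    free = [m for m in machines if len(m.jobs) < m.capacity]
    if free:
        target = min(free, key=lambda m: m.utilization)
        target.jobs.append((job, priority))
        return f"{job} -> {target.name}"
    # Grid is full: suspend the lowest-priority running job, if it is
    # lower priority than the new work.
    target = min(machines, key=lambda m: min(p for _, p in m.jobs))
    victim = min(target.jobs, key=lambda jp: jp[1])
    if victim[1] < priority:
        target.jobs.remove(victim)
        target.jobs.append((job, priority))
        return f"{job} -> {target.name} (suspended {victim[0]})"
    return f"{job} queued: no lower-priority work to preempt"

grid = [Machine("node-a", 2), Machine("node-b", 1)]
for j, p in [("batch1", 1), ("batch2", 1), ("batch3", 1), ("urgent", 5)]:
    print(schedule(grid, j, p))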
5. MANAGEMENT:
The goal of virtualizing the resources on the grid and handling heterogeneous systems more uniformly will create new opportunities to better manage a larger, more dispersed IT infrastructure. It will be easier to visualize capacity and utilization, making it easier for IT departments to control expenditure on computing resources across a larger organization. The grid also offers management of priorities among different projects.
USING A GRID: AN APPLICATION DEVELOPER'S PERSPECTIVE:
Grid applications can be categorized in one of the following three categories:
Applications that are not enabled for using multiple processors but can be executed on different machines.
Applications that are already designed to use the multiple processors of a grid setting.
Applications that need to be modified or rewritten to better exploit the grid.
STEGANOGRAPHY AND DIGITAL WATERMARKING
INDEX:
• Abstract
• Introduction
1. Steganography
2. Digital watermarking
• Steganographic techniques
1. Modern steganographic techniques
2. Historical steganographic techniques
• Digital watermarking as an application of Steganography
• What is watermarking
• Classification
1. Visible watermarking
2. Invisible watermarking
• Differences between visible and invisible watermarking
• Characteristics of digital watermarking
• Attacks due to multiple watermarking
• Watermarking an image
• Differences between Steganography and Digital watermarking
• Conclusion
• References
1. ABSTRACT:
This paper attempts to give a brief overview of Steganography and digital watermarking in general. Emphasis is placed on exposing the different techniques that can be carried out in Steganography. It also gives a brief description of digital watermarking and its characteristics, and finally concludes with the differences between digital watermarking of images and pictures and Steganography. Steganography means hidden or invisible messages. It has been one of the powerful techniques over the past few decades for providing security against illegal access.
For network distribution services of copyrighted digital data (such as pay web distribution of musics, or digital libraries), the possibility of illegal redistribution due to some licensed user, who obtained the data in a legal way from the server, should be considered. Such actions cannot be prevented by use of encrypted communication only. To prevent the illegal copying itself is not realistic, because digital data can be in general copied easily, without decreasing its quality. An alternative solution investigated recently is ``digital watermarking'', which is a technology to embed some auxiliary information into digital data.
2. INTRODUCTION
What is Steganography?
The word "steganography" is of Greek origin and means "covered, or hidden, writing". Its ancient origins can be traced back to 440 BC. Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message; this is in contrast to cryptography, where the existence of the message itself is not disguised but its content is obscured. A steganographic message will appear to be something else: a picture, an article, a shopping list, or some other message. This apparent message is the covertext. For instance, a message may be hidden using invisible ink between the visible lines of an innocuous document.
What is Digital watermarking?
Digital watermarking can be a form of steganography, in which data is hidden in the message without the end user's knowledge. It is a technique which allows an individual to add hidden copyright notices or other verification messages to digital audio, video, or image signals and documents. Such a message is a group of bits describing information pertaining to the signal or to its author (name, place, etc.). The technique takes its name from the watermarking of paper or money as a security measure.
3. Steganographic techniques:
An in-depth look at modern and historical steganographic techniques follows.
• Modern steganographic techniques:
• Concealing messages within the lowest bits of noisy images or sound files (a sketch of this LSB approach is given at the end of this section)
• Concealing data within encrypted data
• Chaffing and winnowing
• Invisible ink
• Null ciphers
• Concealed messages in tampered executable files
• Embedded pictures in video material
• Injecting imperceptible delays into packets sent over the network from the keyboard
• Content-aware steganography, which hides information in the semantics that a human user assigns to a datagram; such systems offer security against a non-human adversary or warden
• Historical steganographic techniques:
• Hidden messages in wax tablets
• Hidden messages on a messenger's body
• Hidden messages on paper, written in secret inks under other messages or on the blank parts of other messages
• Photographically produced microdots, used by espionage agents during and after World War II to send information back and forth
• Counter-propaganda
• The one-time pad, a theoretically unbreakable cipher that produces ciphertexts indistinguishable from random text: only those who hold the private key can distinguish these ciphertexts from any other perfectly random text. Thus, any perfectly random data can be used as a covertext for theoretically unbreakable steganography.
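As a concrete illustration of the first modern technique listed above, the sketch below hides a short text message in the least significant bits of an image's pixels and recovers it again. It assumes the Pillow imaging library; the file names and the message are hypothetical.

```python
from PIL import Image

def embed_lsb(cover_path: str, message: str, out_path: str) -> None:
    """Hide `message` in the least significant bit of each pixel channel."""
    img = Image.open(cover_path).convert("RGB")
    flat = [channel for pixel in img.getdata() for channel in pixel]
    # Encode the message as bits, with a 32-bit length header so it can be recovered.
    data = message.encode("utf-8")
    bits = f"{len(data):032b}" + "".join(f"{byte:08b}" for byte in data)
    if len(bits) > len(flat):
        raise ValueError("Cover image is too small for this message")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)        # overwrite only the lowest bit
    stego = Image.new("RGB", img.size)
    stego.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    stego.save(out_path)

def extract_lsb(stego_path: str) -> str:
    """Recover the hidden message from the least significant bits."""
    img = Image.open(stego_path).convert("RGB")
    bits = "".join(str(channel & 1) for pixel in img.getdata() for channel in pixel)
    length = int(bits[:32], 2)
    payload = bits[32:32 + 8 * length]
    return bytes(int(payload[i:i + 8], 2) for i in range(0, len(payload), 8)).decode("utf-8")

# Example (hypothetical file names):
# embed_lsb("cover.png", "meet at dawn", "stego.png")
# print(extract_lsb("stego.png"))
```

A lossless format such as PNG must be used for the stego image, since lossy compression (e.g. JPEG) would destroy the low-order bits that carry the message.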
4. Digital watermarking as an application of steganography:
Steganography is used by some modern printers, including HP and Xerox brand color laser printers. Tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps. Steganography can be used for digital watermarking, where a message (being simply an identifier) is hidden in an image.
Figure: steganography example, showing the original cover image and the recovered hidden image.
Digital watermarking is a technique which allows an individual to add hidden copyright notices or other verification messages to digital audio, video, or image signals and documents so that their source can be tracked or verified. Watermarking can be classified into two sub-types: visible and invisible.
5. What is ``watermarking'' ?
The process of embedding information into another object or signal can be termed watermarking. It is generally agreed that the watermark is the information which is imperceptibly added to the cover signal in order to convey the hidden data. The digital age has simplified the process of content delivery and has increased the ease with which a buyer can redistribute content, thus denying income to the seller; images published on the internet are an example of such content. For a given cover, one can identify an upper limit on the safe message size that can be embedded in a "typical" cover. This is called the steganographic capacity, and it is unknown even for the simplest methods, such as LSB embedding.
Another application is to protect digital media by fingerprinting each copy with the purchaser's information. If the purchaser makes illegitimate copies, these will contain his name. Fingerprints are an extension of the watermarking principle and can be both visible and invisible.
6. Classification:
Digital watermarking can be classified as visible and invisible watermarking.
1. Visible watermarking:
Visible watermarks change the signal altogether, such that the watermarked signal is totally different from the original signal, e.g., adding an image as a watermark to another image.
Visible watermarks can be used in the following cases:
• Visible watermarking for enhanced copyright protection.
• Visible watermarking used to indicate ownership of originals.
Figure: visible watermarking; the text “Brian Kell 2006” can be seen at the centre of the image.
2. Invisible watermarking:
Invisible watermarks do not change the signal to a perceptually great extent, i.e., there are only minor variations in the output signal.
An example of an invisible watermark is when some bits are added to an image modifying only its least significant bits. Invisible watermarks that are unknown to the end user are steganographic. While the addition of the hidden message to the signal does not restrict that signal's use, it provides a mechanism to track the signal to the original owner.
7. Differences between visible and invisible watermarking:
Visibility is a term associated with the perception of the human eye. A watermarked image in which the watermark is imperceptible, that is, the watermarked image is visually identical to its original, constitutes invisible watermarking. Examples include images distributed over the internet with watermarks embedded in them for copyright protection. Watermarks which fail this test can be classified as visible watermarks; examples include the logos used on paper and on currencies.
8. Characteristics of digital watermarking:
The characteristics of a watermarking algorithm are normally tied to the application it was designed for. The following merely explains the terms used in the context of watermarking.
• Imperceptibility
In watermarking, we traditionally seek high fidelity, i.e. the watermarked work must look or sound like the original. Whether or not this is a good goal is a different discussion.
• Robustness
It is more a property than a requirement of watermarking. The watermark should be able to survive any reasonable processing inflicted on the carrier (the carrier here refers to the content being watermarked).
• Security
The watermarked image should not reveal any clues to the presence of the watermark with respect to unauthorized detection; this is (statistical) undetectability or unsuspiciousness, which is not the same as imperceptibility.
9. Attacks due to multiple watermarking:
Multiple watermarks can be considered attacks in situations where one expects the presence of a single watermark. Thus, any second watermark-embedding operation, or any other processing of the carrier, can be considered an attack. The survival of the watermark in those cases depends on the application; a robust watermark is expected to survive such operations. Some watermarking tools do not allow you to insert a watermark if an image already contains a watermark from the same tool. Sometimes, a watermark from one tool may be overwritten by a watermark from another.
There are instances where a carrier is intentionally watermarked multiple times. In cases of multiple watermarks, the order in which the different watermarks are embedded may influence their detectability: a strong watermark embedded after a weak watermark will mask the weak watermark and render it undetectable.
10. Watermarking an Image
In this case, the server embeds certain identification information for each user into the data before distributing it (this technique is called ``fingerprinting''). Then, when the server finds an illegal copy, the embedded information enables it to detect the guilty user. However, if the server simply embeds user IDs (or their numerical expressions), two or more colluding users can easily recognize the position of the embedded information by comparing the copies they obtained legally. As a consequence, the embedded information may be erased by the colluding users. Furthermore, it may even be possible to forge data which contains the ``identification information of another, innocent user''. To cope with such illegal actions by colluding users, the construction of suitable embedded information has been investigated, namely encoding methods that map user IDs to embedded information.
• How to watermark an image?
Visible watermarks on images can easily be achieved through image-editing software, e.g. ImageMagick or any other tool with watermark functionality. Invisible watermarks on images can be achieved with some proprietary software.
• Getting pixel values of an image in order to watermark it
First determine the format of the image you are dealing with. Then search for libraries which can decode/read the image and provide pixel values. Tools like MATLAB can be helpful here. Another option is to write plugins for image-editing applications like ImageMagick.
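As a minimal sketch of these two steps, the snippet below overlays a text watermark near the centre of an image and reads back individual pixel values as a starting point for custom (e.g. invisible, LSB-based) schemes. It assumes the Pillow library; the file names and watermark text are hypothetical.

```python
from PIL import Image, ImageDraw, ImageFont

def add_visible_watermark(in_path: str, out_path: str, text: str) -> None:
    """Draw a semi-transparent text watermark near the centre of the image."""
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    x, y = img.size[0] // 2, img.size[1] // 2
    draw.text((x, y), text, fill=(255, 255, 255, 128), font=font)  # half-transparent white
    watermarked = Image.alpha_composite(img, overlay).convert("RGB")
    watermarked.save(out_path)

def read_pixel(path: str, x: int, y: int):
    """Return the (R, G, B) value of a single pixel."""
    return Image.open(path).convert("RGB").getpixel((x, y))

# Example (hypothetical file names):
# add_visible_watermark("photo.png", "photo_marked.png", "Brian Kell 2006")
# print(read_pixel("photo_marked.png", 10, 10))
```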
11. Differences between Steganography and Digital watermarking:
Steganography is about concealing the very existence of a message. The word comes from Greek roots, literally means 'covered writing', and is usually interpreted to mean hiding information in other information. Examples include sending a message to a spy by marking certain letters in a newspaper using invisible ink, and adding a sub-perceptible echo at certain places in an audio recording; such techniques were already described by Aeneas the Tactician and other classical writers. It is often thought that communications may be secured by encrypting the traffic, but this has rarely been adequate in practice.
As the purpose of steganography is to allow covert communication between two parties, the existence of which is unknown to a possible attacker, a successful attack consists in detecting the existence of this communication (e.g., using statistical analysis of images with and without hidden information). Watermarking, as opposed to steganography, has the additional requirement of robustness against possible attacks. In this context, the term 'robustness' is still not very well defined; it mainly depends on the application. Copyright marks do not always need to be hidden, as some systems use visible digital watermarks, but most of the literature has focused on imperceptible (e.g., invisible or inaudible) digital watermarks, which have wider applications. Visible digital watermarks are strongly linked to the original paper watermarks, which appeared at the end of the 13th century to distinguish the paper makers of that time; modern visible watermarks may be visual patterns, such as logos, overlaid on digital content.
12. Conclusion:
Steganographic techniques used in electronic communication include steganographic coding inside a transport layer, such as an MP3 file, or a protocol, such as UDP.
Steganography is used by some modern printers, including HP and Xerox brand color laser printers: tiny yellow dots, barely visible and containing encoded printer serial numbers as well as date and time stamps, are added to each page.
Watermarking is nowadays mainly used for copy protection and copyright protection. Historically, watermarking has been used to send ``sensitive'' information hidden in another signal. Watermarking has applications in image and video copyright protection.
Copy protection attempts to find ways which limit access to copyrighted material and/or inhibit the copying process itself. Examples of copy protection include encrypted digital TV broadcasts, access controls to copyrighted software through the use of license servers, and technical copy-protection mechanisms on the media. A recent example is the copy-protection mechanism on DVDs. However, copy protection is very difficult to achieve in open systems, as incidents such as the DVD hack (DeCSS) show.
Copyright protection inserts copyright information into the digital object without the loss of quality. Whenever the copyright of a digital object is in question, this information is extracted to identify the rightful owner. It is also possible to encode the identity of the original buyer along with the identity of the copyright holder, which allows tracing of any unauthorized copies. The most prominent way of embedding information in multimedia data is the use of digital watermarking.
NANOELECTRONICS: THE SINGLE ELECTRON TRANSISTOR
ABSTRACT
The shotgun marriage of chemistry and engineering called “Nanotechnology” is ushering in the era of self-replicating machinery and self-assembling consumer goods made from raw atoms. Utilizing the well understood chemical properties of atoms & molecules, nanotechnology proposes the construction of novel molecular devices possessing extraordinary properties. The single electron transistor or SET is a new type of switching device that uses controlled electron tunneling to amplify current.
By using “electron beam lithography” and “electromigration”, research has led to the design of a single-atom transistor with the help of meticulously synthesized semiconductor crystals called “quantum dots”, which confine electrons in a channel and resemble real atoms in their properties.
This paper presents a scenario of existing and ongoing studies in nanoelectronics, together with the theoretical methods relevant to their understanding. Most of the discussion is premised on the implicit assumption that future quantum-effect nanoelectronic devices will be fabricated at the nanometre scale using molecules. Conductance quantization in the ballistic regime is described under various conditions. The behaviour of the “Coulomb island”, which electrons can only enter by tunneling through one of the insulators, is presented.
Finally, the SET is contrasted with a different construction based on helical logic: the atomic-scale motion of electrons in an applied rotating electric field.
INTRODUCTION
The discovery of the transistor has clearly had enormous impact, both intellectually and commercially, upon our lives and work. A major vein in the corpus of condensed matter physics quite literally owes its existence to this breakthrough. It also led to the microminiaturization of electronics, which has permitted us to have powerful computers on our desktops that communicate easily with each other via the Internet. The resulting globalization of science, technology and culture is now transforming the ways we think and interact.
Over the past 30 years, silicon technology has been dominated by Moore’s law: the density of transistors on a silicon integrated circuit doubles about every 18 months. The same technology that allows us to shrink the sizes of devices, however, cannot be pushed indefinitely. To continue increasing levels of integration beyond these limits, new approaches and architectures are required. In today’s digital integrated-circuit architectures, transistors serve as circuit switches to charge and discharge capacitors to the required logic voltage levels. It is also possible to encode logic states by the positions of individual electrons (in quantum-dot single-electron transistors, for example) rather than by voltages. Such structures are scalable to molecular levels, and the performance of the device improves as the size decreases. Artificially structured single-electron transistors studied to date operate only at low temperature, but molecular- or atomic-sized single-electron transistors could function at room temperature.
Before we turn to the single atom transistors, the subject of this article, we need to learn about the Kondo effect.
The Kondo effect
The effect arises from the interactions between a single magnetic atom, such as cobalt, and the many electrons in an otherwise nonmagnetic metal such as copper. Such an impurity typically has an intrinsic angular momentum, or spin, that interacts with all the surrounding electrons in the metal. As a result, the mathematical description of the system is a difficult many-body problem.
The electrical resistance of a pure metal usually drops as its temperature is lowered, because electrons can travel through a metallic crystal more easily when the vibrations of the atoms are small. However, the resistance saturates as the temperature is lowered below about 10 K, due to the presence of crystal lattice defects in the material, such as vacancies, interstitials, dislocations and grain boundaries. Electrical resistance is related to the amount of back-scattering from defects, which hinders the motion of the electrons through the crystal. This textbook resistive behaviour of a metal changes dramatically when magnetic atoms, such as cobalt, are added: the electrical resistance increases as the temperature is lowered further, in contrast to that of a pure metal. This effect was first observed in the 1930s.
This behaviour does not involve any phase transition, such as a metal-insulator transition. A parameter called the Kondo temperature (roughly speaking, the temperature at which the resistance starts to increase again) completely determines the low-temperature electronic properties of the material. Kondo considered the scattering from a magnetic ion that interacts with the spins of the conducting electrons, and found that the second term in the calculation could be much larger than the first. The result is that the resistance of a metal increases logarithmically as the temperature is lowered; hence the name ‘Kondo effect’. However, the calculation also makes the unphysical prediction that the resistance becomes infinite at even lower temperatures. It turns out that Kondo’s result is correct only above a certain temperature, which became known as the Kondo temperature, Tk. Suppose the impurity has only one electron, with energy E. The electron can quantum-mechanically tunnel from the impurity and escape if E is greater than the Fermi level of the metal; otherwise it remains trapped. The defect has a spin of ½, and its z-component is fixed as either ‘spin up’ or ‘spin down’. However, so-called exchange processes can take place that effectively flip the spin of the impurity from spin up to spin down, or vice versa, while simultaneously creating a spin excitation in the Fermi sea. In such a process an electron is taken from the magnetic impurity into an unoccupied energy state at the surface of the Fermi sea. The energy needed for this process is large, between 1 and 10 eV for magnetic impurities. Classically, it is forbidden to take an electron from the defect without putting energy into the system. In quantum mechanics, however, the Heisenberg uncertainty principle allows such a configuration to exist for a very short time, around h/E, where h is the Planck constant. Within this time scale, another electron must tunnel from the Fermi sea back to the impurity. However, since the uncertainty principle says nothing about the spin of this electron, its z-component may point in the opposite direction. In other words, the initial and final states of the impurity can have different spins. This spin exchange qualitatively changes the energy spectrum of the system: when many such processes are taken together, one finds that a new state, known as the Kondo resonance, is generated with exactly the same energy as the Fermi level.
Such a resonance is effective at scattering electrons with energies close to the Fermi level. Since the same electrons are responsible for the low-temperature conductivity of a metal, the strong scattering from this state increases the resistance. The Kondo resonance is unusual.
In contrast to an ordinary resonance, the Kondo state is generated by exchange processes between a localized electron and free-electron states. Since many electrons need to be involved, the Kondo effect is a many-body phenomenon. It is important to note that the Kondo state is always “on resonance”, since it is fixed to the Fermi energy. Even though the system may start with an energy E that is very far from the Fermi energy, the Kondo effect alters the energy of the system so that it is always on resonance. The only requirement for the effect to occur is that the metal is cooled to sufficiently low temperatures, below the Kondo temperature TK.
Enter nanotechnology
Nanotechnology aims to manipulate materials at the atomic scale. An important tool in the field is the scanning tunneling microscope (STM), which can image a surface with atomic resolution, move individual atoms across a surface and measure the energy spectrum at particular locations. Recently, the STM has been used to image and manipulate magnetic impurities on the surface of metals, opening a new avenue of research into the Kondo effect. Quantum dots are small structures that behave like artificial atoms; they are often called artificial atoms because their electronic properties resemble those of real atoms. A voltage applied to one of the gate electrodes of the device controls the number of electrons, N, that are confined in the dot. If an odd number of electrons is trapped within the dot, the total spin S is necessarily non-zero and has a minimum value of S = 1/2. This localized spin, embedded between large electron seas in the two leads, mimics the cobalt-in-copper system, and many of the known Kondo phenomena can be expected to occur in these transistor-type devices.
One of the main distinctions between a quantum dot and a real metal is related to their different geometries. In a metal, the electron states are plane waves, and scattering from impurities in the metal mixes electron waves with different momenta. This momentum transfer increases the resistance. In a quantum dot, however, all the electrons have to travel through the device, as there is no electrical path around it. In this case, the Kondo resonance makes it easier for states belonging to the two opposite electrodes to mix. This mixing increases the conductance (i.e. decreases the resistance). The advantage of quantum dots is the ease with which the parameters of these artificial atoms can be controlled. The conductance of a quantum dot depends only on T/Tk. The Kondo effect disappears when the number of electrons on the quantum dot is even. Moreover, at the lowest temperatures, the conductance approaches the quantum limit of conductance, 2e²/h, where e is the charge of an electron. The Kondo cloud consists of electrons that have previously interacted with the same magnetic impurity. Since each of these electrons contains information about the same impurity, they effectively have information about each other; in other words, the electrons are mutually correlated.
Towards single-electron devices
Unlike field-effect transistors, single-electron devices are based on an intrinsically quantum phenomenon: the tunnel effect. This is observed when two metallic electrodes are separated by an insulating barrier about 1 nm thick - in other words, just 10 atoms in a row. Electrons at the Fermi energy can "tunnel" through the insulator, even though in classical terms their energy would be too low to overcome the potential barrier.
The electrical behaviour of the tunnel junction depends on how effectively the barrier transmits electron waves, which decreases exponentially with its thickness, and on the number of electron-wave modes that impinge on the barrier, which is given by the area of the tunnel junction divided by the square of the electron wavelength. A single-electron transistor exploits the fact that the transfer of charge through the barrier becomes quantized when the junction is made sufficiently resistive.
Figure 1: An electron in a box.
This quantization process is shown particularly clearly in a simple system known as a single-electron box (figure1). If a voltage source charges a capacitor, Cg, through an ordinary resistor, the charge on the capacitor is strictly proportional to the voltage and shows no sign of charge quantization. But if the resistance is replaced by a tunnel junction, the metallic area between the capacitor plate and one side of the junction forms a conducting "island" surrounded by insulating materials. In this case the transfer of charge onto the island becomes quantized as the voltage increases, leading to the so-called Coulomb staircase.
This Coulomb staircase is only seen under certain conditions. Firstly, the energy of the electrons due to thermal fluctuations must be significantly smaller than the Coulomb energy, which is the energy needed to transfer a single electron onto the island when the applied voltage is zero. This Coulomb energy is given by e²/2C, where e is the charge of an electron and C is the total capacitance of the gate capacitor, Cg, and the tunnel junctions. Secondly, the tunnel effect itself should be weak enough to prevent the charge of the tunneling electrons from becoming delocalized over the two electrodes of the junction, as happens in chemical bonds. This means that the conductance of the tunnel junction should be much less than the quantum of conductance, 2e²/h, where h is Planck's constant.
When both these conditions are met, the steps observed in the charge are somewhat analogous to the quantization of charge on oil droplets observed by Millikan in 1911. In a single-electron box, however, the charge on the island is not random but is controlled by the applied voltage. As the temperature or the conductance of the barrier is increased, the steps become rounded and eventually merge into the straight line typical of an ordinary resistor.
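A rough numerical illustration of the Coulomb staircase (not taken from this paper; the gate capacitance below is an arbitrary example value): the island settles into the electron number n that minimizes its charging energy, so sweeping the gate voltage produces unit steps in n.

```python
E_CHARGE = 1.602e-19      # electron charge (C)

def island_electrons(v_gate: float, c_gate: float) -> int:
    """Number of excess electrons on the island of a single-electron box.
    The island picks the integer n that minimizes the charging energy
    E(n) = (n*e - Cg*Vg)^2 / (2*C_total); the minimizing n is simply the
    induced gate charge rounded to the nearest integer (the Coulomb staircase)."""
    induced = c_gate * v_gate / E_CHARGE   # gate charge in units of e
    return round(induced)

# Sweep the gate voltage and watch the island charge rise in unit steps
# (Cg = 1 aF is an illustrative value).
c_gate = 1e-18
for i in range(0, 21):
    v = i * 0.02                           # 0 V to 0.4 V
    print(f"Vg = {v:4.2f} V  ->  n = {island_electrons(v, c_gate)}")
```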
A single-electron transistor
Figure 2: Principle of the SET.
The SET transistor can be viewed as an electron box that has two separate junctions for the entrance and exit of single electrons (figure 2). It can also be viewed as a field-effect transistor in which the channel is replaced by two tunnel junctions forming a metallic island. The voltage applied to the gate electrode affects the amount of energy needed to change the number of electrons on the island.
The SET transistor comes in two versions that have been nicknamed "metallic" and "semiconducting". These names are slightly misleading, however, since the principle of both devices is based on the use of insulating tunnel barriers to separate conducting electrodes.
In the original metallic version, a metallic material such as a thin aluminium film is used to make all of the electrodes. The metal is first evaporated through a shadow mask to form the source, drain and gate electrodes. The tunnel junctions are then formed by introducing oxygen into the chamber so that the metal becomes coated by a thin layer of its natural oxide. Finally, a second layer of the metal - shifted from the first by rotating the sample - is evaporated to form the island.
In the semiconducting versions, the source, drain and island are usually obtained by "cutting" regions in a two-dimensional electron gas formed at the interface between two layers of semiconductors such as gallium aluminium arsenide and gallium arsenide. In this case the conducting regions are defined by metallic electrodes patterned on the top semiconducting layer. Negative voltages applied to these electrodes deplete the electron gas just beneath them, and the depleted regions can be made sufficiently narrow to allow tunneling between the source, island and drain. Moreover, the electrode that shapes the island can be used as the gate electrode.
Operation of a SET transistor
So how does a SET transistor work? The key point is that charge passes through the island in quantized units. For an electron to hop onto the island, its energy must equal the Coulomb energy e²/2C. When both the gate and bias voltages are zero, electrons do not have enough energy to enter the island and current does not flow. As the bias voltage between the source and drain is increased, an electron can pass through the island when the energy in the system reaches the Coulomb energy. This effect is known as the Coulomb blockade, and the critical voltage needed to transfer an electron onto the island, equal to e/C, is called the Coulomb gap voltage.
Now imagine that the bias voltage is kept below the Coulomb gap voltage. If the gate voltage is increased, the energy of the initial system (with no electrons on the island) gradually increases, while the energy of the system with one excess electron on the island gradually decreases. At the gate voltage corresponding to the point of maximum slope on the Coulomb staircase, both of these configurations equally qualify as the lowest energy states of the system. This lifts the Coulomb blockade, allowing electrons to tunnel into and out of the island.
The Coulomb blockade is lifted when the gate capacitance is charged with exactly minus half an electron, which is not as surprising as it may seem. The island is surrounded by insulators, which means that the charge on it must be quantized in units of e, but the gate is a metallic electrode connected to a plentiful supply of electrons. The charge on the gate capacitor merely represents a displacement of electrons relative to a background of positive ions.
Figure 3: Counting electrons with the SET.
If we further increase the gate voltage so that the gate capacitor becomes charged with -e, the island again has only one stable configuration separated from the next-lowest-energy states by the Coulomb energy. The Coulomb blockade is set up again, but the island now contains a single excess electron. The conductance of the SET transistor therefore oscillates between minima for gate charges that are integer multiples of e, and maxima for half-integer multiples of e (figure 3).
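A toy model of this oscillation (an illustrative sketch, not the full orthodox theory of the SET) simply checks how close the induced gate charge is to a half-integer multiple of e; near half-integers the blockade is lifted and current can flow, while near integers it is blocked. The 1 aF gate capacitance and sweep range are arbitrary example values.

```python
E_CHARGE = 1.602e-19   # electron charge (C)

def blockade_lifted(v_gate: float, c_gate: float, window: float = 0.05) -> bool:
    """True when the gate charge Cg*Vg sits within `window` electrons of a
    half-integer multiple of e, i.e. at a conductance maximum of the SET."""
    n_g = c_gate * v_gate / E_CHARGE                 # gate charge in units of e
    distance = abs((n_g - 0.5) - round(n_g - 0.5))   # distance to nearest half-integer
    return distance < window

# Example sweep: conductance maxima appear periodically, once per electron
# added to the island.
for i in range(0, 41):
    v = i * 0.01
    state = "on (conductance peak)" if blockade_lifted(v, 1e-18) else "off (Coulomb blockade)"
    print(f"Vg = {v:4.2f} V : {state}")
```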
Accurate measures of charge
Such a rapid variation in conductance makes the single-electron transistor an ideal device for high-precision electrometry. In this type of application the SET has two gate electrodes, and the bias voltage is kept close to the Coulomb blockade voltage to enhance the sensitivity of the current to changes in the gate voltage.
The voltage of the first gate is initially tuned to a point where the variation in current reaches a maximum. By adjusting the gate voltage around this point, the device can measure the charge of a capacitor-like system connected to the second gate electrode. A fraction of this measured charge is shared by the second gate capacitor, and a variation in charge of ¼e is enough to change the current by about half the maximum current that can flow through the transistor at the Coulomb blockade voltage. The variation in current can be as large as 10 billion electrons per second, which means that these devices can achieve a charge sensitivity that outperforms other instruments by several orders of magnitude.
The precision with which electrons can be counted is ultimately limited by the quantum delocalization of charge that occurs when the tunnel-junction conductance becomes comparable with the conductance quantum, 2e²/h. However, the current through a SET transistor increases with the conductance of the junctions, so it is important to understand how the single-electron effects and Coulomb blockade disappear when the tunnel conductance is increased beyond 2e²/h.
Towards room temperature
Until recently, single-electron transistors had to be kept at temperatures of a few hundred millikelvin to maintain the thermal energy of the electrons below the Coulomb energy of the device. Most early devices had Coulomb energies of a few hundred microelectronvolts because they were fabricated using conventional electron-beam lithography, and the size and capacitance of the island were relatively large. For a SET transistor to work at room temperature, the capacitance of the island must be less than 10⁻¹⁷ F, and therefore its size must be smaller than 10 nm.
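The numbers behind this requirement can be checked with a short calculation (a sketch using standard physical constants; the specific capacitance values are illustrative): the Coulomb energy e²/2C must significantly exceed the thermal energy kB·T.

```python
E_CHARGE = 1.602e-19       # electron charge (C)
K_BOLTZMANN = 1.381e-23    # Boltzmann constant (J/K)

def coulomb_energy(c_total: float) -> float:
    """Charging energy e^2 / (2*C) of an island with total capacitance C."""
    return E_CHARGE ** 2 / (2.0 * c_total)

thermal_300k = K_BOLTZMANN * 300.0      # ~4.1e-21 J, i.e. about 26 meV at room temperature
for c in (1e-15, 1e-17, 1e-18):         # 1 fF, 10 aF and 1 aF islands
    ec = coulomb_energy(c)
    print(f"C = {c:.0e} F : E_C = {ec:.2e} J "
          f"({'>' if ec > thermal_300k else '<'} k_B*T at 300 K = {thermal_300k:.2e} J)")
```

With these constants, the Coulomb energy only clearly dominates the room-temperature thermal energy of roughly 26 meV once the capacitance approaches the attofarad range, which is why the island must shrink towards the 10 nm scale.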
Perspectives on the future
Researchers have long considered whether SET transistors could be used for digital electronics. Although the current varies periodically with gate voltage - in contrast to the threshold behaviour of the field-effect transistor - a SET could still form a compact and efficient memory device. However, even the latest SET transistors suffer from "offset charges", which means that the gate voltage needed to achieve maximum current varies randomly from device to device. Such fluctuations make it impossible to build complex circuits.
One way to overcome this problem might be to combine the island, two tunnel junctions and the gate capacitor that comprise a single-electron transistor in a single molecule - after all, the intrinsically quantum behaviour of a SET transistor should not be affected at the molecular scale. In principle, the reproducibility of such futuristic transistors would be determined by chemistry, and not by the accuracy of the fabrication process. Only one thing is certain: if the pace of miniaturization continues unabated, the quantum properties of electrons will become crucial in determining the design of electronic devices before the end of the next decade.
CONCLUSION
A common thread between Stone Age, medieval, industrial and molecular nanotechnology is the exponential curve. This ever-accelerating curve, representing human knowledge, science and technology, will be driven in a new way by what will probably become the first crude, pre-assembler nanotech products.
By treating atoms as discrete, bit like objects, molecular manufacturing will bring a digital revolution to the production of material objects. Working at the resolution limit of matter, it will enable the ultimate in miniaturization and performance. Research programs in chemistry, molecular biology and scanning probe microscopy are laying the foundations for a technology of molecular machine systems.
The motion of electrons in a transistor has been described as a complex dance. Switching action is one property of a transistor that has been demonstrated. Bardeen, Brattain and Shockley were concerned with the amplification properties of the transistor they had invented; it remains to be seen whether amplification can be achieved to any experimentally observable extent in such a single-atom transistor.
The shotgun marriage of chemistry and engineering called “Nanotechnology” is ushering in the era of self-replicating machinery and self-assembling consumer goods made from raw atoms. Utilizing the well understood chemical properties of atoms & molecules, nanotechnology proposes the construction of novel molecular devices possessing extraordinary properties. The single electron transistor or SET is a new type of switching device that uses controlled electron tunneling to amplify current.
By using the “Electron beam lithography” and “Electromigration”, the research leads to the designing of a single atom transistor with the help of the meticulously synthesized semiconductor crystals called “quantum dots”, which embodies the electrons confined in a channel and resembles same in its properties as an real atom.
This paper presents a scenario on existing and ongoing studies on NANO ELECTRONICS with the theoretical methods relevant to their understanding. Most of the preceding discussion is premised upon the implicit assumption. That future quantum effect Nano Electronic Devices will be fabricated in Nano Metre scale using molecules. Conductance quantization in ballistic regime has been described under various conditions. The behaviour of “Coulomb Island” through which the electrons can only enter by tunneling through one of the insulators is presented.
At last, the SET presents that it is the different construction is which is based on helical logic, atomic scale motion of electrons in an applied rotating electric field. INTRODUCTION
The discovery of the transistor has clearly had enormous impact, both intellectually and commercially, upon our lives and work. A major vein in the corpus of condensed matter physics, quite literally, owes its existence to this break through. It also led to the microminiaturization of electronics, which has permitted us to have powerful computers on our desktops that communicate easily with each other via the Internet. The resulting globalization of science, technology and culture is now transforming the ways we think and interact.
Over the past 30 years, silicon technology has been dominated by Moore’s law: the density of transistors on a silicon integrated circuit doubles about every 18 months. The same technology that allows us to shrink the sizes of devices. To continue the increasing levels of integration beyond the limits mentioned above, new approaches and architectures are required .In today’s digital integrated circuit architectures, transistors serve as circuit switches to charge and discharge capacitors to the required logic voltage levels. It is also possible to encode logic states by the positions of individual electrons (in quantum dot single-electron transistors, for example) rather than by voltages. Such structures are scaleable to molecular levels, and the performance of the device improves as the size decreases. Artificially structured single electron transistors studied to date operate only at low temperature, but molecular or atomic sized single electron transistors could function at room temperature.
Before we turn to the single atom transistors, the subject of this article, we need to learn about the Kondo effect.
The Kondo effect
The effect arises from the interactions between a single-magnetic atom, such as cobalt, and the many electrons in an otherwise nonmagnetic metal such as copper. Such as an impurity typically has an intrinsic angular momentum or spin that interacts with all the surrounding electrons in the metal. As a result, the mathematical description of the system is a difficult many-body problem.
The electrical resistance of a pure metal usually drops as its temperature is lowered, because electrons can travel through a metallic crystal more easily when the vibrations of the atoms are small. However, the resistance saturates as the temperature is lowered below about 10k due to the presence of crystal lattice defects in the material, such as vacancies, interstitial, dislocations and grain boundaries. Electrical resistance is related to the amount of back scattering from defects, which hinders the motion of the electrons through the crystal. This text book resistive behavior of metal changes dramatically when magnetic atoms, such as cobalt, are added. The electrical resistance increases as the temperature is lowered further, in contrast to that of a pure metal. This effect was first observed in the 1930s.
This behaviour does not involve any phase transition, such as a metal-insulator transition. A parameter called the Kondo temperature (roughly speaking the temperature at which the resistance starts to increase again) completely determines the low-temperature electronic properties of the material. Considering the scattering from a magnetic ion that interacts with the spins of the conducting electrons. It was found that the second term in the calculation could be much larger than the first. The result is that the resistance of a metal increases logarithmically when the temperature is lowered. Hence the name ‘Kondo effect’. However, it also makes the unphysical prediction that the resistance will be infinite at even lower temperatures. It turns out that Kondo’s result is correct only above a certain temperature, which became known as the Kondo temperature, Tk. The impurity has only one electron with energy E. In this case, the electron can quantum-mechanically tunnel from the impurity and escape, if E is greater than Fermi level of the metal. Otherwise it remains trapped. The defect has a spin of ½ and its z-component is fixed as either ‘spin up’ or ‘spin down’. However, the so-called exchange process can take place that effectively flip the spin of the impurity from spin up to spin down or vice-versa, while simultaneously creating a spin excitation in the Fermi sea. When an electron is taken from the magnetic impurity in an unoccupied energy state at the surface of the Fermi Sea. The energy needed for this process is large, between 1 and 10eV, for the magnetic impurities. Classically, it is forbidden to take an electron from the defect without putting energy into the system. In quantum mechanics, however, the Heisenberg uncertainty principle allows such a configuration to exist for a very short time-around h/E, where h is the Planck constant. Within this time scale, another electron must tunnel from the Fermi Sea back to the impurity. However, since the uncertainty principle says nothing about the spin of this electron, its z-component may point in the opposite direction. In other words, the initial and final states of the impurity can have different spins. This spin exchange qualitatively changes the energy spectrum of the system. When many such processes are taken together, one finds that a new state-known as the Kondo resonance- is generated with exactly the same energy as the Fermi level.
Such a resonance is effective at scattering electrons with energies close to the Fermi level. Since the same electrons are responsible for the low-temperature conductivity of a metal, the strong scattering from this state increases the resistance. The Kondo resonance is unusual.
In contrast, the Kondo State is generated by exchange processes between a localized electron and free electron states. Since many electrons need to be involved, the Kondo effect is many body phenomenons. It is important to note that the Kondo State is always “on resonance” since it is fixed to the Fermi energy. Even though the system may start with energy E that is very far away from the Fermi energy, the Kondo effect alters the energy of the system so that it is always on resonance. The only requirement for the effect to occur is that the metal is cooled to sufficiently low temperatures below the Kondo temperature TK.
Enter nanotechnology
Nanotechnology aims to manipulate materials at the atomic scale. An important tool in the field is the scanning tunneling microscope (STM), which can image a surface with atomic resolution, move individual atoms across a surface and measure the energy spectrum at particular locations. Recently, the STM has been used to image and manipulate magnetic impurities on the surface of metals, opening a new avenue of research into the Kondo effect. Quantum dots are small structures that are often called artificial atoms, since their electronic properties resemble those of real atoms. A voltage applied to one of the gate electrodes of the device controls the number of electrons, N, that are confined in the dot. If an odd number of electrons is trapped within the dot, the total spin of the dot is necessarily non-zero and has a minimum value of S = 1/2. This localized spin, embedded between the large electron seas in the two leads, mimics the cobalt-in-copper system, and many of the known Kondo phenomena can be expected to occur in these transistor-type devices.
One of the main distinctions between a quantum dot and a real metal is related to their different geometries. In a metal, the electron states are plane waves, and scattering from impurities mixes electron waves with different momenta. This momentum transfer increases the resistance. In a quantum dot, however, all the electrons have to travel through the device, as there is no electrical path around it. In this case, the Kondo resonance makes it easier for states belonging to the two opposite electrodes to mix, and this mixing increases the conductance (i.e. decreases the resistance). The advantage of quantum dots is the ease with which the parameters of these artificial atoms can be controlled. The conductance of a quantum dot depends only on T/TK. The Kondo effect disappears when the number of electrons on the quantum dot is even. Moreover, at the lowest temperatures, the conductance approaches the quantum limit of conductance, 2e²/h, where e is the charge of an electron. The Kondo cloud consists of electrons that have previously interacted with the same magnetic impurity. Since each of these electrons carries information about the same impurity, they effectively carry information about each other; in other words, the electrons are mutually correlated.
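The statement that the dot conductance depends only on T/TK and saturates at 2e²/h is often summarized in experiments by an empirical scaling function; the form used below (with s of about 0.22 for a spin-1/2 impurity) and the value chosen for TK are an illustration of that idea, not something derived in this text:

    e = 1.602e-19            # elementary charge, C
    h = 6.626e-34            # Planck constant, J s
    G0 = 2 * e**2 / h        # conductance quantum, about 77.5 microsiemens
    print(f"2e^2/h ~ {G0 * 1e6:.1f} microsiemens")

    def kondo_conductance(T, TK, s=0.22):
        # Empirical universal form for a spin-1/2 Kondo dot (assumed, for illustration)
        return G0 * (1 + (2**(1 / s) - 1) * (T / TK)**2) ** (-s)

    TK = 1.0                                 # hypothetical Kondo temperature, kelvin
    for T in (0.1, 1.0, 10.0):
        print(f"T/TK = {T / TK:4.1f}  ->  G = {kondo_conductance(T, TK) / G0:.2f} x 2e^2/h")

By construction the conductance is half its maximum at T = TK and approaches 2e²/h as T falls well below TK.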
Towards single-electron devices
Unlike field-effect transistors, single-electron devices are based on an intrinsically quantum phenomenon: the tunnel effect. This is observed when two metallic electrodes are separated by an insulating barrier about 1 nm thick - in other words, just a few atoms in a row. Electrons at the Fermi energy can "tunnel" through the insulator, even though in classical terms their energy would be too low to overcome the potential barrier.
The electrical behaviour of the tunnel junction depends on how effectively the barrier transmits electron waves, which decreases exponentially with its thickness, and on the number of electron-wave modes that impinge on the barrier, which is given by the area of the tunnel junction divided by the square of the electron wavelength. A single-electron transistor exploits the fact that the transfer of charge through the barrier becomes quantized when the junction is made sufficiently resistive.
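Reading the previous paragraph as a back-of-the-envelope estimate (all numbers below are assumed, purely for illustration), the junction conductance can be approximated Landauer-style as the number of impinging modes multiplied by an exponentially small transmission per mode:

    import math

    e = 1.602e-19                 # elementary charge, C
    h = 6.626e-34                 # Planck constant, J s

    area = (100e-9)**2            # assumed 100 nm x 100 nm junction
    wavelength = 0.5e-9           # assumed electron wavelength in the metal, ~0.5 nm
    thickness = 1e-9              # barrier thickness, ~1 nm
    kappa = 1e10                  # assumed decay constant in the barrier, 1/m

    n_modes = area / wavelength**2                    # modes impinging on the barrier
    transmission = math.exp(-2 * kappa * thickness)   # exponential drop with thickness
    G = n_modes * transmission * 2 * e**2 / h         # rough conductance estimate
    print(f"modes ~ {n_modes:.0f}, transmission ~ {transmission:.1e}, G ~ {G:.1e} S")

With these assumptions the junction conductance comes out in the nanosiemens range, far below the conductance quantum, which is the regime where charge transfer becomes quantized.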
Figure 1. An electron in a box.
This quantization process is shown particularly clearly in a simple system known as a single-electron box (figure 1). If a voltage source charges a capacitor, Cg, through an ordinary resistor, the charge on the capacitor is strictly proportional to the voltage and shows no sign of charge quantization. But if the resistance is replaced by a tunnel junction, the metallic area between the capacitor plate and one side of the junction forms a conducting "island" surrounded by insulating materials. In this case the transfer of charge onto the island becomes quantized as the voltage increases, leading to the so-called Coulomb staircase.
This Coulomb staircase is only seen under certain conditions. Firstly, the energy of the electrons due to thermal fluctuations must be significantly smaller than the Coulomb energy, which is the energy needed to transfer a single electron onto the island when the applied voltage is zero. This Coulomb energy is given by e²/2C, where e is the charge of an electron and C is the total capacitance of the gate capacitor, Cg, and the tunnel junctions. Secondly, the tunnel effect itself should be weak enough to prevent the charge of the tunneling electrons from becoming delocalized over the two electrodes of the junction, as happens in chemical bonds. This means that the conductance of the tunnel junction should be much less than the quantum of conductance, 2e²/h, where h is Planck's constant.
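To make the two conditions concrete, the short check below uses an assumed total island capacitance of 1 fF (not a value given in the text) and compares the Coulomb energy with the thermal energy:

    e = 1.602e-19            # elementary charge, C
    kB = 1.381e-23           # Boltzmann constant, J/K

    C = 1e-15                # assumed total island capacitance, 1 fF
    E_C = e**2 / (2 * C)     # Coulomb energy
    T_eq = E_C / kB          # temperature at which kT equals the Coulomb energy
    print(f"E_C ~ {E_C / e * 1e6:.0f} micro-eV, kT = E_C at about {T_eq * 1e3:.0f} mK")

For a 1 fF island the Coulomb energy is roughly 80 micro-eV, so the device must be held far below about 0.9 K, and the junction conductance must also stay well below 2e²/h (about 77.5 microsiemens).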
When both these conditions are met, the steps observed in the charge are somewhat analogous to the quantization of charge on oil droplets observed by Millikan in 1911. In a single-electron box, however, the charge on the island is not random but is controlled by the applied voltage. As the temperature or the conductance of the barrier is increased, the steps become rounded and eventually merge into the straight line typical of an ordinary resistor.
A single-electron transistor
Figure 2. Principle of the SET.
The SET transistor can be viewed as an electron box that has two separate junctions for the entrance and exit of single electrons (figure 2). It can also be viewed as a field-effect transistor in which the channel is replaced by two tunnel junctions forming a metallic island. The voltage applied to the gate electrode affects the amount of energy needed to change the number of electrons on the island.
The SET transistor comes in two versions that have been nicknamed "metallic" and "semiconducting". These names are slightly misleading, however, since the principle of both devices is based on the use of insulating tunnel barriers to separate conducting electrodes.
In the original metallic version, a metallic material such as a thin aluminium film is used to make all of the electrodes. The metal is first evaporated through a shadow mask to form the source, drain and gate electrodes. The tunnel junctions are then formed by introducing oxygen into the chamber so that the metal becomes coated by a thin layer of its natural oxide. Finally, a second layer of the metal - shifted from the first by rotating the sample - is evaporated to form the island.
In the semiconducting versions, the source, drain and island are usually obtained by "cutting" regions in a two-dimensional electron gas formed at the interface between two layers of semiconductors such as gallium aluminium arsenide and gallium arsenide. In this case the conducting regions are defined by metallic electrodes patterned on the top semiconducting layer. Negative voltages applied to these electrodes deplete the electron gas just beneath them, and the depleted regions can be made sufficiently narrow to allow tunneling between the source, island and drain. Moreover, the electrode that shapes the island can be used as the gate electrode.
Operation of a SET transistor
So how does a SET transistor work? The key point is that charge passes through the island in quantized units. For an electron to hop onto the island, its energy must equal the Coulomb energy e²/2C. When both the gate and bias voltages are zero, electrons do not have enough energy to enter the island and current does not flow. As the bias voltage between the source and drain is increased, an electron can pass through the island when the energy in the system reaches the Coulomb energy. This effect is known as the Coulomb blockade, and the critical voltage needed to transfer an electron onto the island, equal to e/C, is called the Coulomb gap voltage.
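For the same kind of assumed capacitance as in the earlier sketch, the Coulomb gap voltage e/C works out to a conveniently measurable value; the lines below are just that arithmetic:

    e = 1.602e-19                      # elementary charge, C
    C = 1e-15                          # assumed total capacitance, 1 fF
    print(f"Coulomb gap voltage e/C ~ {e / C * 1e6:.0f} microvolts")   # about 160 uV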
Now imagine that the bias voltage is kept below the Coulomb gap voltage. If the gate voltage is increased, the energy of the initial system (with no electrons on the island) gradually increases, while the energy of the system with one excess electron on the island gradually decreases. At the gate voltage corresponding to the point of maximum slope on the Coulomb staircase, both of these configurations equally qualify as the lowest energy states of the system. This lifts the Coulomb blockade, allowing electrons to tunnel into and out of the island.
The Coulomb blockade is lifted when the gate capacitor is charged with exactly minus half an electron, which is not as surprising as it may seem. The island is surrounded by insulators, which means that the charge on it must be quantized in units of e, but the gate is a metallic electrode connected to a plentiful supply of electrons. The charge on the gate capacitor merely represents a displacement of electrons relative to a background of positive ions, and so it need not be an integer multiple of e.
Figure 3. Counting electrons with the SET.
If we further increase the gate voltage so that the gate capacitor becomes charged with -e, the island again has only one stable configuration separated from the next-lowest-energy states by the Coulomb energy. The Coulomb blockade is set up again, but the island now contains a single excess electron. The conductance of the SET transistor therefore oscillates between minima for gate charges that are integer multiples of e, and maxima for half-integer multiples of e (figure 3).
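The periodicity described above can be expressed in one line: conductance maxima occur whenever the gate charge CgVg equals a half-integer multiple of e. The sketch below lists the first few such gate voltages for an assumed gate capacitance of 0.1 fF (an illustrative value, not one quoted in the text):

    e = 1.602e-19            # elementary charge, C
    Cg = 1e-16               # assumed gate capacitance, 0.1 fF
    for n in range(4):
        Vg = (n + 0.5) * e / Cg        # gate voltage at the (n+1)-th conductance maximum
        print(f"maximum {n + 1}: Vg ~ {Vg * 1e3:.2f} mV")

The maxima are spaced by e/Cg, about 1.6 mV for this choice of gate capacitance.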
Accurate measures of charge
Such a rapid variation in conductance makes the single-electron transistor an ideal device for high-precision electrometry. In this type of application the SET has two gate electrodes, and the bias voltage is kept close to the Coulomb blockade voltage to enhance the sensitivity of the current to changes in the gate voltage.
The voltage of the first gate is initially tuned to a point where the variation in current reaches a maximum. By adjusting the gate voltage around this point, the device can measure the charge of a capacitor-like system connected to the second gate electrode. A fraction of this measured charge is shared by the second gate capacitor, and a variation in charge of ¼e is enough to change the current by about half the maximum current that can flow through the transistor at the Coulomb blockade voltage. The variation in current can be as large as 10 billion electrons per second, which means that these devices can achieve a charge sensitivity that outperforms other instruments by several orders of magnitude.
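For scale, 10 billion electrons per second corresponds to a current of only about a nanoampere, as the conversion below shows:

    e = 1.602e-19                      # charge per electron, C
    rate = 1e10                        # electrons per second, the figure quoted above
    print(f"I = {rate * e * 1e9:.1f} nA")    # about 1.6 nA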
The precision with which electrons can be counted is ultimately limited by the quantum delocalization of charge that occurs when the tunnel-junction conductance becomes comparable with the conductance quantum, 2e²/h. However, the current through a SET transistor increases with the conductance of the junctions, so it is important to understand how the single-electron effects and Coulomb blockade disappear when the tunnel conductance is increased beyond 2e²/h.
Towards room temperature
Until recently single-electron transistors had to be kept at temperatures of a few hundred millikelvin to maintain the thermal energy of the electrons below the Coulomb energy of the device. Most early devices had Coulomb energies of a few hundred microelectronvolts because they were fabricated using conventional electron-beam lithography, and the size and capacitance of the island were relatively large. For a SET transistor to work at room temperature the capacitance of the island must be less than 10⁻¹⁷ F and therefore its size must be smaller than 10 nm.
Perspectives on the future
Researchers have long considered whether SET transistors could be used for digital electronics. Although the current varies periodically with gate voltage - in contrast to the threshold behaviour of the field-effect transistor - a SET could still form a compact and efficient memory device. However, even the latest SET transistors suffer from "offset charges", which means that the gate voltage needed to achieve maximum current varies randomly from device to device. Such fluctuations make it impossible to build complex circuits.
One way to overcome this problem might be to combine the island, two tunnel junctions and the gate capacitor that comprise a single-electron transistor in a single molecule - after all, the intrinsically quantum behaviour of a SET transistor should not be affected at the molecular scale. In principle, the reproducibility of such futuristic transistors would be determined by chemistry, and not by the accuracy of the fabrication process. Only one thing is certain: if the pace of miniaturization continues unabated, the quantum properties of electrons will become crucial in determining the design of electronic devices before the end of the next decade.
CONCLUSION
A common thread running from Stone Age technology through the medieval and industrial eras to molecular nanotechnology is the exponential curve. This ever-accelerating curve, representing human knowledge, science and technology, will be driven in a new way by what will probably become the first crude, pre-assembler nanotech products.
By treating atoms as discrete, bit-like objects, molecular manufacturing will bring a digital revolution to the production of material objects. Working at the resolution limit of matter, it will enable the ultimate in miniaturization and performance. Research programs in chemistry, molecular biology and scanning probe microscopy are laying the foundations for a technology of molecular machine systems.
The motion of electrons in a transistor has been described as a complex dance. Switching action is one property of a transistor that has been demonstrated. Bardeen, Brattain and Shockley were concerned with the amplification properties of the transistor they had invented. It remains to be seen whether amplification can be achieved to any experimentally observable extent in such a single-atom transistor.