Sunday, December 31, 2006

The chemical industry from the 1850s until today

Since the Industrial Revolution, we have had many examples of how new discoveries have translated into greater economic growth for the world. Before that era, there was no real concept of growth, except by conquest, plunder, or exploration. Most people understood that they were born with a particular status in life. It is no coincidence that Adam Smith's The Wealth of Nations appeared in 1776, at the beginning of the modern era, and we are still indebted to him for his insight into an economic system that could accommodate and build on this revolution. We must not lose sight of the miracle that growth rates since then have risen remarkably, even though the world's population was growing rapidly. Increasingly, jobs have been found for this growing population at an amazing rate, albeit unevenly in different countries.

Although the early inventions of the Industrial Revolution were not scientific but largely empirical technology, later on, and especially in the twentieth century, science has been essential to modern technologically based industry, and thus crucial to the great economic growth described earlier. We decided some years ago to illuminate this question by studying the chemical industry, the first science-based industry. As a science-based industry, it began in 1856 in the United Kingdom with the discovery of synthetic mauve by William Henry Perkin, but it had earlier roots in science, which we explore briefly.

An account of this remarkable development is contained in our new book published in 1998 by John Wiley and Sons, in conjunction with the Chemical Heritage Foundation, entitled Chemicals and Long-Term Economic Growth, edited by Professor Nathan Rosenberg of Stanford, and us. We are indebted to that study for some of these remarks.

The science of chemistry originated in the eighteenth century. The first truly scientific chemist was Antoine Lavoisier, who systematically measured and studied chemical reactions. Unfortunately, he was guillotined by that other Revolution in 1794, not because of his chemical knowledge, but because he had earned his living as a tax collector for the Ancien Régime. It was not the only time politics and science have collided: think of Galileo and the Inquisition, and Lysenko in the Soviet Union. In fact, history records that Robespierre and Hitler, at different times and in different circumstances, both said in effect that their revolution had no need of scientists!

Where is the chemical industry going?

The business of chemistry (a.k.a. the chemical industry) is one of the oldest U.S. industries. It is a dynamic, forward-looking high-tech industry, a keystone of our economy and a leader in protecting the environment. Although the public does not always appreciate the many benefits arising from chemistry, its products greatly enhance our quality of life. They include more than 70,000 products that enable rising U.S. productivity and living standards. Synthetic fibers and permanent-press clothing, life-saving medicines, health improvement products, more protective packaging materials, longer-lasting paints, stronger adhesives, more durable and safer tires, lightweight automobile parts, and stronger composite materials in aircraft and spacecraft are only a few of the thousands of innovative products of the business of chemistry. It is also an enabling and transforming business. U.S. companies engaged in the business of chemistry have remained internationally competitive, constantly creating new processes and products to solve performance, safety and efficiency problems in a number of industries and arenas.

The U.S. chemical industry is the world's largest, accounting for 27 percent of the total world chemical production. Moreover, in spite of wars and depression, the United States has maintained this number-one position since the 1910s. The chemical industry is the largest exporting sector in the United States, with exports totaling $68 billion in 1998. Balanced against imports of $55 billion, this provided a trade surplus of over $13 billion, continuing a more than seventy-year uninterrupted history of trade surpluses.

Defining the Business

The business of chemistry is not easily captured by either the Standard Industrial Classification (SIC) system or the North American Industrial Classification System (NAICS). The definitional foundations of both are based on the concept of related production activities. In contrast, the business of chemistry is largely market driven (or focused). Participants in the industry have never viewed themselves along the lines of economic nomenclature but rather within four main segments - basic chemicals, specialty chemicals, life sciences, and consumer products - each with its own characteristics, growth dynamics, markets, developments and issues. The boundaries between the segments are not clearly defined, and some degree of overlap exists. Table 1 describes key characteristics or parameters.

BASF touts GM spuds for starch

German chemical giant BASF is awaiting EU approval for commercial cultivation of one of its latest breakthroughs, an inedible potato called Amflora. Unappealing as it sounds, the GM spud will make a key contribution to renewable resources across Europe, says Thorsten Storck, global project manager at BASF Plant Science. The crop has been modified to produce a type of starch particularly suited to paper production.

The company claims that Amflora starch will have economic and environmental advantages over standard potato starch, which contains a mixture of 80 per cent amylopectin and 20 per cent amylose. Both these compounds are glucose polymers, but they have very different physicochemical characteristics.

Amylopectin has an extensively branched structure and acts as a thickening agent. Amylose has a more linear, unbranched structure and acts as a gelling agent. Gelation interferes with several industrial applications of starch, leaving producers with a lumpy mess. To reduce the tendency of amylose to gel, the starch has to be chemically modified before use, a process that consumes both energy and water.

BASF researchers say they have the solution. They have modified one of the potato’s genes for starch synthesis (which encodes an enzyme called granule-bound starch synthase, or GBSS) and report that the resulting potato produces 100 per cent amylopectin.

There is no method for synthesising starch commercially, so starch is derived principally from corn, but also from potatoes and cassava. A type of corn that produces only amylopectin has existed for over 100 years, but corn is less well suited to the northern European climate and is often imported.

Persuading potatoes to make amylopectin-rich starch has proved more of a challenge. Potatoes have the advantage not only of being suitable for cultivation in northern Europe, but also of producing starch that is better suited to certain industrial processes used to make paper, textiles, adhesives and packaging.

A potato that produces only amylopectin was developed in the Netherlands over ten years ago, and several other companies are working hard to release their own GM varieties. So while BASF cannot claim Amflora as a great scientific first, experts working in the field say the company is to be congratulated for getting its potato this close to approval for cultivation in the European Union. The weight behind arguably the largest chemical company in the world has played an essential role.

‘This is a very interesting step in the development of renewable industrial materials,’ said Alison Smith, head of the department of metabolic biology at the John Innes Centre, Norwich, UK.

The company hopes to gain EU approval following discussions in December, and plans to begin cultivation next year. This will be the first time in eight years that the EU has debated the approval of a live genetically modified organism (GMO).

The fact that this is a non-food crop may work in its favour, and will be an important test case, being the first GMO to go up before regulatory authorities since a de facto EU moratorium on biotech approvals was lifted in 2004.

EU chemicals legislation settled

European Union negotiators announced on 1 December that they had overcome the final hurdles that were holding up new legislation to control the use of chemicals.

The breakthrough in late-night discussions between legislators and ministerial officials should clear the way for an EU-wide regulatory regime, known as Reach, to come into force in 2007.

If the latest compromises are approved, as expected, by the European Parliament in a vote scheduled for 13 December, EU ministers can sign the draft into law within weeks. A go-ahead should follow from heads of government at an EU summit on 14-15 December.

Making a substitution

A crucial part of the Reach (registration, evaluation and authorisation of chemicals) compromise depends on the ‘substitution principle’ – the extent to which the chemicals industry and downstream users should be pressured to discontinue use of hazardous substances and switch to safer alternatives.

In effect, the deal broadly accepts that Europe’s £265 billion chemicals industry can continue to use between 1500 and 2000 ‘substances of high concern’, provided that processes are ‘adequately controlled’. Many members of the European Parliament (MEPs) had wanted substitution to be mandatory, but this was opposed by the Netherlands, Poland, and particularly Germany, where a powerful lobbying exercise was led by the chemicals giant BASF.

The revised Reach regulation now requires producers and importers to present an analysis of possible alternatives to hazardous substances. If alternatives are available, the applicants must produce ‘substitution plans’ showing how they intend to phase out the hazardous substance. If alternatives are not available, the applicants must present an R&D programme aimed at finding alternatives.

The European Environmental Bureau – an alliance of NGOs including Greenpeace and Friends of the Earth – attacked the deal, on the grounds that substitution plans will only be submitted when the applicant company itself identifies a safer alternative. This offers ‘an incentive for chemicals companies to continue ignoring safer alternatives,’ according to an EEB statement.

But Dutch MEP Ria Oomen-Ruijten, lead negotiator for the European People’s Party, rejected claims that substitution plans will be unenforceable, and said that the compromise will mean ‘a further reduction of useless testing’, while the sharing of data on animal testing will help avoid unnecessary duplication.

Fight to the Finnish

The deal comes at the end of Finland’s six-month tenure of the rotating presidency of the Union. As well as winning plaudits for brokering the deal on Reach, Finland will also gain the kudos – and economic benefits – of hosting the European Chemicals Agency (ECHA) in Helsinki. ECHA will handle registration applications and safety data for around 30 000 widely used substances as Reach is phased in over the next 11 years.

Pending final confirmation, it appears that around 17 000 existing substances produced in small quantities will be either exempted or subject to less stringent safety data requirements.

The compromises go a long way toward meeting chemicals industry concerns, while environmental campaigners have called them a ‘sell-out to the intense lobbying of the German chemicals industry’ and ‘a sad day for environmental policy’.

Reach will replace around 40 directives enacted over the past three decades. One of its key goals is to offer greater transparency to the general public, providing easier access to information about chemicals.

Currently, panels of experts advise the European Commission – the EU’s executive arm – on whether certain substances should be restricted. But only 3000 substances have been vetted under current procedures, accounting for less than one per cent by volume of chemicals used in industry.

Data deals

Industry had also been worried about how commercially-sensitive data, necessary for some applications, would be handled by ECHA. A recommendation that the data should remain confidential for six years, rather than three, was hailed as ‘good news for manufacturers’ by Oomen-Ruijten. ‘This will prevent years of expensive research becoming worthless,’ she said.

One final bone of contention was whether Reach should include a token declaration that producers and importers have a general duty of care to prevent harm – an issue that had raised legal concerns because of differences in liability laws in EU member states.

The principle will now be included in the declaratory preamble to the regulation, rather than in the body of the legislation itself, which may cause problems for interpretation and enforcement in the future.

Global influence

Reach is one of the largest EU legal texts ever drafted, running to more than 1000 pages, and its influence will be felt far beyond Europe. As major exporters to the EU, American chemicals firms have taken a close interest in Reach’s progress, not only because of the compliance issues involved but also because Reach rules may influence federal policy at home, or spark transatlantic disputes at the World Trade Organisation.

As British MEP Chris Davies, negotiator for the parliament’s Liberal group, claimed: ‘Europe's model for the control of chemicals is set to become the standard for the world.’

Renowned cancer scientist was paid by chemical firm for 20 years

A world-famous British scientist failed to disclose that he held a paid consultancy with a chemical company for more than 20 years while investigating cancer risks in the industry, the Guardian can reveal.

Sir Richard Doll, the celebrated epidemiologist who established that smoking causes lung cancer, was receiving a consultancy fee of $1,500 a day in the mid-1980s from Monsanto, then a major chemical company and now better known for its GM crops business.

While he was being paid by Monsanto, Sir Richard wrote to a royal Australian commission investigating the potential cancer-causing properties of Agent Orange, made by Monsanto and used by the US in the Vietnam war. Sir Richard said there was no evidence that the chemical caused cancer.

Documents seen by the Guardian reveal that Sir Richard was also paid a £15,000 fee by the Chemical Manufacturers Association and two other major companies, Dow Chemicals and ICI, for a review that largely cleared vinyl chloride, used in plastics, of any link with cancers apart from liver cancer - a conclusion with which the World Health Organisation disagrees. Sir Richard's review was used by the manufacturers' trade association to defend the chemical for more than a decade.

The revelations will dismay scientists and other admirers of Sir Richard's pioneering work and fuel a rift between the majority who support his view that the evidence shows cancer is a product of modern lifestyles and those environmentalists who argue that chemicals and pollution must be to blame for soaring cancer rates.

Yesterday Sir Richard Peto, the Oxford-based epidemiologist who worked closely with him, said the allegations came from those who wanted to damage Sir Richard's reputation for their own reasons. Sir Richard had always been open about his links with industry and gave all his fees to Green College, Oxford, the postgraduate institution he founded, he said.

Professor John Toy, medical director of Cancer Research UK, which funded much of Sir Richard's work, said times had changed and the accusations must be put into context. "Richard Doll's lifelong service to public health has saved millions of lives. His pioneering work demonstrated the link between smoking and lung cancer and paved the way towards current efforts to reduce tobacco's death toll," he said. "In the days he was publishing it was not automatic for potential conflicts of interest to be declared in scientific papers."

But a Swedish professor who believes that some of Sir Richard's work has led to the underestimation of the role of chemicals in causing cancers said that transparency was all-important. "It's OK for any scientist to be a consultant to anybody, but then this should be reported in the papers that you publish," said Lennart Hardell of University Hospital, Orebro.

Sir Richard died last year. Among his papers in the Wellcome Foundation library archive is a contract he signed with Monsanto. Dated April 29 1986, it extends for a year the consulting agreement that began on May 10 1979 and offers improved terms. "During the one-year period of this extension your consulting fee shall be $1,500 per day," it says.

Monsanto said yesterday it did not know how much work Sir Richard did for the company, but said he was an expert witness for Solutia, a chemical business spun off from Monsanto, as recently as 2000.

Chemical Earnings Decline Slightly

While the fourth quarter of 2005 proved difficult for U.S. chemical companies, with earnings rising just slightly above year-earlier results, the first quarter of 2006 was even worse: C&EN's group of 25 firms posted an aggregate earnings decline—the first the industry has seen since the third quarter of 2003.

When all of the results were in, the companies had posted a total earnings decline of 2.7% to $4.0 billion. Sales, bolstered largely by price increases, totaled $49.6 billion, a 5.1% increase from the comparable period in 2005. Earnings are from continuing operations, excluding extraordinary and nonrecurring items.

As a result, profitability suffered. The aggregate profit margin for the combined companies fell to 8.1% from 8.7% in the same period a year earlier. This outcome, however, was a big improvement from last year's fourth quarter, when the total profit margin, battered by the aftermath of the Gulf Coast hurricanes, was just 6.1%.
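
As a quick illustration of the margin arithmetic used throughout this survey, here is a minimal sketch (the inputs are the aggregate figures quoted above; the margin is simply earnings divided by sales):

```python
# Aggregate profit margin = earnings from continuing operations / sales.
# Both figures are the C&EN aggregates quoted above, in billions of dollars.
earnings_bn = 4.0   # Q1 2006 combined earnings
sales_bn = 49.6     # Q1 2006 combined sales

margin = earnings_bn / sales_bn
print(f"Q1 2006 aggregate margin: {margin:.1%}")  # ~8.1%, matching the text
```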

Disasters aside, there was a weakness in many industry fundamentals in the first quarter. According to the Federal Reserve Board, total U.S. chemical production declined 1.5% from the same period a year earlier. Within that, the important basic chemicals sector saw output drop a whopping 7.8%.

Prices, though, continued to increase, rising 8.9% for all chemicals, according to Labor Department data. Meanwhile, the average producer price index for basic chemicals in the quarter jumped 14.4%.

Despite the decline in production in the quarter, the increase in prices raised the dollar value of chemical shipments, or what companies actually sell to customers. The Department of Commerce reports that shipments of all chemicals in the first quarter rose 6.2% to $142.6 billion. Shipments of chemicals excluding pharmaceuticals, a product basket that is more in line with the mix of products at the 25 companies, rose 7.7% to $107.7 billion.

Although chemical earnings were down overall, the majority of firms had earnings increases in the quarter. Seventeen of the companies saw earnings rise, with two, Celanese and H.B. Fuller, showing triple-digit growth. The remaining eight had lower earnings than in the first quarter of 2005. This includes Terra Industries, where earnings went from $4.4 million in first-quarter 2005 to a loss of $25.3 million in the first three months of this year.

The two largest companies on the list, Dow Chemical and DuPont, were among the firms with lower earnings. Dow's earnings fell 10.6% to $1.21 billion on a 2.9% rise in sales to $12.0 billion. Earnings at DuPont declined 10.3% to $867 million, while sales were down by 0.5% to $7.39 billion.

Nevertheless, profitability remained high at both companies. Dow posted a profit margin of 10.1% compared with 11.6% in the comparable 2005 quarter, while DuPont's profit margin of 11.7% was down from 13.0% in the year-earlier period.

Both companies put a brave face on the declines. At Dow, Chief Financial Officer Geoffrey E. Merszei, says: "This was a quarter in which we again benefited from our strategy, with an increase in [earnings before income taxes] of our combined performance businesses mitigating a decline in our basics portfolio. And while turnarounds during the quarter impacted volume, and U.S. sales slowed at the start of the year on the expectation of lower prices, we saw demand pick up again in March, and that momentum has continued into the second quarter."

DuPont Chief Executive Officer Charles O. Holliday Jr. says: "We knew it would be a difficult operating environment in the first quarter, and I am very encouraged by the better-than-expected performance of our company. We are fully committed to growing revenue, controlling costs, and improving returns on assets across all of our businesses." Additionally, Holliday says, "We are raising our full-year earnings outlook in light of our first-quarter performance and the progress we have made in successfully implementing initiatives to accelerate shareholder value."

The largest measurable percentage decline in earnings for the quarter, 59.1% compared with the year-earlier period, came from Huntsman Corp. CEO Peter R. Huntsman chose to make a comparison with the previous quarter. "Our results in first-quarter 2006 show marked improvement as compared to the hurricane-impacted results achieved in the fourth quarter of 2005," he says, noting that sales volumes improved across most product lines as end-market demand continues to grow in North America, Europe, and Asia.

But all is not perfect. "As we have indicated in the past, we are frustrated with the valuations that the market appears to place on our differentiated businesses," Huntsman notes. "We continue to evaluate available options for improving shareholder value, and as previously announced, we are aggressively pursuing the sale of certain of our base chemicals and polymers assets and/or a spin-off of these segments."

Eastman Chemical, which had 35% earnings growth in the final quarter of 2005, saw its fortunes turn in the first quarter of this year as earnings fell 27.7% from year-earlier levels to $112 million. The decline came despite a 2.3% increase in sales to $1.80 billion, and was due primarily to lower polymers segment earnings, which fell to $17.0 million from $84.0 million in the comparable 2005 quarter. The company's raw material and energy costs increased by about $100 million compared with those in first-quarter 2005, and results also were affected by about $19 million in costs associated with operational disruptions at Eastman's Longview, Texas, facility.

Among companies with gains for the quarter, Celanese, a company new to the C&EN list, took the prize. Earnings at the firm more than tripled, rising 234.2% to $127 million as sales increased 11.8% to $1.65 billion. Profitability at the company jumped to 7.7% from 2.6%. "Our results demonstrate the strength of our integrated hybrid structure as our downstream businesses delivered improved performance year-over-year," CEO David N. Weidman says. "We are focusing our portfolio, expanding globally, and relentlessly pursuing cost improvements to deliver on our commitments and create value for our shareholders."

Also new to the list of 25 firms is Lyondell Chemical, which enters the group as the third-largest company. Up until now, year-to-year comparability had been problematic when analyzing the firm's data.

Lyondell had 14.2% earnings growth to $290 million, double its 7.1% sales increase to $4.76 billion. The company's profit margin increased to 6.1% from 5.7%. Of the future, CEO Dan F. Smith says, "Volatility and current high prices in the energy markets continue to present challenges, but strong business conditions ultimately should prevail, positioning Lyondell's chemical products for another strong year."

Celanese and Lyondell have replaced Ferro and PolyOne in C&EN's survey of chemical company earnings.

Other companies are predicting a good year on the basis of their first-quarter results. One of them is FMC Corp., which had a 38.0% increase in earnings to $69 million as sales rose 7.5% to $594 million. "With our strong first-quarter performance, we have raised our full-year 2006 outlook for earnings, before restructuring and other income and charges, to $5.35 to $5.55 per diluted share," CEO William G. Walter says. Diluted earnings per share is the value reached if all convertible securities were converted or all warrants or stock options were exercised. "Through the balance of the year, we expect to realize the ongoing benefits of higher selling prices in industrial chemicals, lower interest expense, and continued profitable growth in agricultural products and specialty chemicals, though unfavorable currency translation and higher energy and raw material costs are expected to persist," he adds.
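
To make the diluted-EPS definition concrete, here is a minimal sketch; the share counts below are invented purely for illustration and are not FMC's actual figures:

```python
# Diluted EPS divides earnings by the share count that would exist if all
# convertible securities, warrants and stock options were exercised.
# All numbers below are hypothetical, chosen only to illustrate the formula.
net_earnings = 69_000_000     # dollars (FMC's quarterly earnings, from the text)
basic_shares = 38_000_000     # hypothetical shares currently outstanding
potential_shares = 2_000_000  # hypothetical options/warrants/convertibles

basic_eps = net_earnings / basic_shares
diluted_eps = net_earnings / (basic_shares + potential_shares)
print(f"basic EPS:   ${basic_eps:.2f}")   # ~$1.82
print(f"diluted EPS: ${diluted_eps:.2f}") # lower: the conservative figure
```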

For all of 2006, Praxair expects to see continued year-over-year growth of about 10% and diluted earnings per share in the range of $2.74 to $2.82, representing 13-17% growth over 2005. CEO Dennis H. Reilley says, "We expect strong growth in 2006 and 2007 as projects in our backlog come onstream and new applications technologies take hold." In the first quarter, Praxair's earnings rose 19.1% to $225 million. Sales improved 10.9% to $2.03 billion.

At Rohm and Haas, CEO Raj L. Gupta expects 3-4% demand growth due to higher sales in key markets such as electronic materials and coatings, even though the outlook for global demand and input costs remains uncertain. At the same time, the company is focused on improving sales mix, managing selling prices, maintaining growth in emerging markets, and improving manufacturing efficiency. "As a result," Gupta says, "we expect full-year sales growth in the 3-5% range, yielding annual sales of approximately $8.3 billion and full-year earnings in the $3.15- to $3.30-per-share range."

The APSA Process in Nitrogen Generators

Some of the new-generation nitrogen generators use the APSA process to generate nitrogen. The APSA process relies on the fractional distillation of air at very low (cryogenic) temperatures, in a single column. In other words, APSA nitrogen generators use cryogenic distillation of air to generate nitrogen.

After the air is compressed, it is purified in the nitrogen generator so that the cryogenic operation runs smoothly. The air is compressed to around 9 bar with a centrifugal or screw compressor and afterwards cooled with the help of a cooling unit.

The air that runs through the nitrogen generator must then be purified, so it passes through several filters and is cooled down further.

Afterwards the cryogenic process intervenes: the air enters a special area of the nitrogen generator, the cooling area, where the oxygen in the air is separated from the nitrogen. At the bottom of this area collects an oxygen-rich liquid, and at the top the desired nitrogen. The low temperature inside the nitrogen generator is maintained using a small quantity of liquid nitrogen, which is then added to the product nitrogen.

The process is designed to be fully automatic, requiring no manual procedures. If problems occur, the nitrogen generator is designed to try to solve them on its own. For example, if nitrogen consumption increases, a pressure regulator will maintain the normal pressure.

Or, if the concentration of oxygen is too high, the APSA process is automatically stopped and the excess oxygen is vented outside. The nitrogen generator then waits for the oxygen level to decrease, and if it does not, the whole system is shut down. When this occurs, the nitrogen generator takes safety precautions.
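
The self-regulating behaviour described above can be pictured as a simple control loop. The sketch below is an illustrative outline only; the threshold values and function names are invented, not taken from any real APSA controller:

```python
# Illustrative control loop for the automatic behaviour described above.
# Thresholds are hypothetical; a real unit uses its own instrumentation.
O2_LIMIT_PPM = 10.0        # assumed maximum oxygen allowed in the product gas
TARGET_PRESSURE_BAR = 9.0  # assumed normal working pressure

def control_step(pressure_bar: float, o2_ppm: float) -> str:
    """Decide the generator's response to one pair of sensor readings."""
    if o2_ppm > O2_LIMIT_PPM:
        # Oxygen too high: stop the process and vent the excess; if the
        # level never falls, the whole system is shut down.
        return "stop process, vent excess oxygen, wait for O2 to fall"
    if pressure_bar < TARGET_PRESSURE_BAR:
        # Consumption has risen: the regulator restores normal pressure.
        return "open regulator to restore pressure"
    return "normal operation"

print(control_step(8.5, 4.0))   # open regulator to restore pressure
print(control_step(9.0, 12.0))  # stop process, vent excess oxygen, ...
```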

Technology Leads to Reduction in Nitrogen Generator Size

As technology improved, so did nitrogen generator systems, and recent developments have led to a reduction in the size of nitrogen generators.

These new-generation, small-size nitrogen generators are very effective and reliable, and they operate automatically with very little maintenance required. The main difference between these nitrogen generators and conventional ones is size: the small-capacity units take up only 60% of the space used by a conventional nitrogen generator, a saving of 40%.

Another difference is that these nitrogen generators do not supply 99.99% pure nitrogen, but something around 95% pure, which is not a disadvantage because most users and laboratories do not require 99.99% pure nitrogen. The purity may be increased to 99.5% if the user desires, either by adsorption or, more cheaply, by adding a process to the nitrogen generator that runs the resulting gas through a special filter to reduce its oxygen concentration. If the buyer requests it, vaporization systems and liquid nitrogen storage can also be supplied together with the nitrogen generator.

These units have been tested and found to meet all the requirements of a nitrogen generator, and they are the best and cheapest solution for many needs.

The pressure of the gas delivered by the small-size nitrogen generators is typically around 6-7 bar(g), and it can be increased with the help of a compressor.

In conclusion, these nitrogen generators are the best solution if you wish to save space and money, and not only that: they can be used by anyone, because they require little maintenance and low power, have a compact design, and can operate unattended and monitor themselves.

Compact nitrogen generators are available for purchase by anyone on the internet market.

New-Age Paint Thickening and Rheological Additives: Solvitex and Solvizen

For paint manufacturers worldwide, there is a radical new development among thickening agents and colloidal stabilizers for aqueous latex paints. So far, for most companies producing aqueous paint emulsions, the preferred thickening agent has been hydroxyethyl cellulose (HEC). HEC-based thixotropic additives are produced by renowned multinational companies such as Hercules, Akzo Nobel and Dow.

Here we refer to a biopolymer based on a polysaccharide of plant origin, derived by a controlled derivatisation process. The product, developed by Asian Trade Link Chemicals Division in India, has already proved a successful replacement for HECs at many paint companies. It is known as Solvitex thickening agent and colloidal stabilizer.

The Solvitex range of products is offered in a wide range of aqueous-solution viscosities. It comes as an off-white powder, and its thickening properties are similar to those of HEC.

But the advantages of Solvitex over HECs are many, the prime factors being stability and economical pricing. It has been observed that for medium-viscosity products such as Natrosol 250 HBR or Bermocoll 381, the initial viscosity drop after settling is about 20%; the finished paint then stays stable for about six months, after which the viscosity drops further. Solvitex shows an initial settling drop of only 10%, and on physical observation the product remains stable for more than a year at the same viscosity.

As many paint technicians know, the stability of a thickening agent, once incorporated in a paint system, is inevitably sensitive to wide temperature changes. This applies especially to export shipments, when the paint goes through varied temperature conditions in transit: a sudden drop or rise in outside temperature can affect its stability. Solvitex has proven stable under those conditions.

Among HECs, I personally find Dow's Cellosize the most stable product, compared with Bermocoll and Tylose.

As a paint formulator, I, like colleagues working at other companies, have faced a selection dilemma. For cheaper paints, the stronger or high-viscosity grades were used, but there was always flocculation after some period of shelf storage. For the expensive paints we had to prefer medium-viscosity grades to achieve acceptable shelf life. Most companies follow the same practice.

When I came across the Solvitex range of thickeners, I was quoted very competitive prices, at least 10% lower than HEC-based products; but my major concern was to check the colloidal stabilization. The product was tested immediately and found to perform excellently.

Apart from that, Solvitex has provided better color acceptance and coatability.

Soon afterwards we came across Solvizen, developed by the same company at an even more economical price. Its thickening and stability properties are the same, but it is priced lower.

However, the water-solution clarity of Solvizen is not as good as that of Solvitex or HECs; it is slightly turbid. But when it was incorporated in paints, we could not find any difference, and again the paint was stable on physical observation.

Asia Set to Dominate Potassium Chemicals Market

A major new Study from British Sulphur Consultants on 'The Market for Potassium Chemicals' concludes that growth in demand for caustic potash (potassium hydroxide, or KOH) should average 4% per annum to 2006, but that there will be regional differences in growth patterns which have important implications for producers and consumers alike.

* KOH is made by the electrolysis of potassium chloride (KCl), in a similar way to the production of caustic soda (NaOH) from salt (NaCl). KOH is the main intermediate for the production of industrial potassium chemicals, and is used as such in many end-use applications. The new Study covers supply and demand for industrial potassium chloride, potassium hydroxide, and potassium carbonate. In all, over 50 end-use applications are reviewed in the 317 pages of analysis and comment, covering the demand for, supply of, trade in, and costs of industrial potassium chemicals.

* The industrial potassium chemicals industry centres around potassium hydroxide. Around two thirds of all industrial KCl sold is converted into KOH. Global demand for KOH in 2000 was 1.36 million tonnes, which is forecast to grow to 1.71 million tonnes by the end of 2006. Around 92% of KOH consumption takes place in West Europe, North America, and Asia. The study found that growth in West Europe and North America will average 3% p.a. to 2006, whereas growth in Asia will average 6% p.a.
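
As a quick check that the quoted figures hang together, compounding the 2000 base demand at the study's average growth rate reproduces the 2006 forecast (a minimal sketch):

```python
# Compound the 2000 global KOH demand at ~4% per annum for six years
# and compare with the study's 2006 forecast of 1.71 million tonnes.
base_2000 = 1.36   # million tonnes, global KOH demand in 2000
growth = 0.04      # average annual growth rate from the study
years = 6

forecast_2006 = base_2000 * (1 + growth) ** years
print(f"implied 2006 demand: {forecast_2006:.2f} million tonnes")  # ~1.72
```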

* The largest end-use market for potassium hydroxide is the production of potassium carbonate, which accounted for 34% of demand in 2000. All other end-use applications have a share of demand below 10%. For potassium carbonate the main end-use application is the production of cathode ray tube glass for television sets and computer monitors. Demand for CRT glass is growing faster in Asia than in the rest of the world as producers of TVs and computer monitors increasingly move production there to benefit from lower fixed production costs.

* The Study examines the key supply-demand issues facing producers of industrial KCl, KOH, and potassium carbonate (K2CO3). These include:
· The future of mercury cell electrolysis for KOH production in West Europe
· The potential for additional KOH capacity in Asia and North America
· The impact of over-capacity in K2CO3 on price and margin in North America and West Europe
· The potential for rationalisation of the supply base for industrial potassium chemicals

* The Study highlights the need for restructuring of the industry during the next decade, with greater concentration of production resources in Asia and changes in West Europe and North America, if producers of potassium chemicals are to turn the industry's growth potential into profitable manufacturing.

* The Market for Potassium Chemicals: Current Status and Future Prospects is available in printed format and CD-ROM from British Sulphur Consultants. For further information contact John Segal or Allan Pickett.

* British Sulphur Consultants is the chemical division of CRU International Ltd., the world's largest business research and consultancy provider in the inorganic chemicals, minerals, metals, metal fabrication, and wire and cable industries. Based in London, England, CRU also has offices in the USA and Singapore.

Friday, December 29, 2006

Working with Cleaning Chemicals

As cleaning companies, we work with an assortment of cleaning chemicals. Your employees need to know how to use all the chemicals they work with and be aware of where the MSDS sheets are for the products they use.

Following are safety tips that apply to the use of any product they may be using.

  • Never mix chemicals.
  • Measure all chemicals and mix according to the label directions. If the directions say one ounce of product to four gallons of water, keep those ratios even if you are mixing a smaller quantity (see the sketch after this list). Too weak a solution may not provide the cleaning power you need; too strong a solution wastes chemical and may cause damage to surfaces or injuries to employees.
  • Never use chemicals that are in an unlabeled container. Employees should never sniff, taste or try to guess what is in the container.
  • Employees need to know what chemicals they are to use for each particular cleaning task and how to use that chemical properly.
  • Clean spills immediately. Before cleaning the spill, employees should put on protective goggles and latex or rubber gloves. If the spill is on a hard surface, carefully mop it up. If the spill is on carpet, absorb it with a white rag, then use another damp white rag to blot the spill. Report spills to a supervisor in case the spill needs to be further extracted from the carpet.
  • Eye protection, gloves and other protective clothing may be necessary when working with some cleaning chemicals. Employees need to know what protective gear to use when working with chemicals.
  • Using chemicals correctly helps make cleaning tasks go quicker. Make sure your employees follow all of your company's specific guidelines when working with chemicals.
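
The ratio arithmetic behind the mixing tip above can be sketched in a few lines; the label figures below simply reuse the one-ounce-per-four-gallons example, and the helper name is ours:

```python
# Keep the label's dilution ratio constant no matter how much you mix.
LABEL_PRODUCT_OZ = 1.0  # ounces of product per...
LABEL_WATER_GAL = 4.0   # ...gallons of water, per the label example above

def product_needed(water_gallons: float) -> float:
    """Ounces of product for a given amount of water, at the label ratio."""
    return water_gallons * (LABEL_PRODUCT_OZ / LABEL_WATER_GAL)

# Mixing a single gallon still honours the 1 oz : 4 gal ratio.
print(product_needed(1.0))  # 0.25 oz
print(product_needed(4.0))  # 1.0 oz, the full label quantity
```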

Probiotics An Emerging Alternative

This article emphasizes the importance of probiotics in our daily life. Probiotics are live microorganisms, native to the human gut, which can colonize the alimentary tract and produce beneficial effects for the host. They offer an alternative and additional therapy for common intestinal problems such as diarrhoea and irritable bowel syndrome, and act as nutritional support.


Nowadays probiotics are emerging as alternatives to antibiotics and antivirals in the treatment of many gastrointestinal diseases, as probiotics are devoid of side effects. Probiotics are live microorganisms, native to the human gut, which can colonize the alimentary tract and produce beneficial effects for the host. They offer an alternative and additional therapy for common intestinal problems such as diarrhoea and irritable bowel syndrome, and act as nutritional support. Acting through different mechanisms, probiotics help to restore the normal flora in the GIT. This article emphasizes the importance of probiotics in our daily life.

Probiotics are live microorganisms that, when ingested, survive passage through the GIT and produce beneficial effects for the host, including amelioration or prevention of a specific disease state. These organisms protect and enhance our lives, playing a beneficial role in the treatment of travellers' diarrhoea, antibiotic-associated diarrhoea, infective diarrhoea, inflammatory bowel diseases, irritable bowel syndrome and colon cancer. The organisms used in probiotics are known as bio-therapeutic agents.

Bowel Microflora
The gut is sterile at birth. Bacteria start appearing as soon as the baby starts feeding; these bacteria comprise the bowel flora. The composition of the bowel flora also depends on the age, race, and diet of the person. The actual number of bowel bacteria varies in different parts of the digestive system. The oesophagus contains the bacteria swallowed with food, but very few of them survive the stomach acid. The number of bacteria increases as we move down the alimentary tract; there may be 10 million bacteria/ml of faecal fluid.

Examples of Intestinal Flora
· Lactobacillus acidophilus
· Saccharomyces boulardii
· Lactic acid bacillus
· Lactobacillus GG
· Bifidobacterium longum

Due to various drug therapies, improper diet, or infectious conditions, the bowel microflora balance may get disturbed. The lost flora must then be replaced by using probiotics.

Advantages and Investigative Results of Probiotics
1. Probiotics produce important nutrients, eliminate toxins, protect food from putrefaction and enhance the body's immune system. Lactobacillus bulgaricus helps by suppressing the toxin production of putrefactive bacteria in the intestine.
2. The bacteria present in administered probiotics synthesise vitamins for the host's consumption. They also produce antibiotics that can kill foreign species of bacteria invading our gut. Lactobacilli produce lactolin, an antibiotic-like substance, as well as lactobrevin, lactocidin, lactobacin, acidolin and acidophilin, which counteract S. aureus. Lactobacillus bulgaricus acts against the enterotoxic action of E. coli and prevents coliform-associated diarrhoea. Lactobacillus acidophilus produces koumiss and kefir, antibacterial substances which inhibit the growth of E. coli. These have a bacteriostatic or bactericidal action on E. coli, S. aureus, B. subtilis, B. cereus and Mycobacterium tuberculosis.
3. Lactobacillus organisms such as Lactobacillus GG have proven particularly valuable in preventing intestinal problems. They produce antibacterial substances that can kill E. coli, Streptococcus and Salmonella species.
4. Probiotics are more effective in the treatment of viral diarrhoea than bacterial diarrhoea, suggesting immune enhancement as the mechanism of their action.
5. L. GG has already been shown to reduce the incidence of chemically induced colon cancer. Continuous interactions occur in the gut between the microflora and the immune system.

Mechanism of Action of Probiotics
Recent experiments on probiotics have shown that they exert their therapeutic effects through the following mechanisms.
1. They reduce the activity of some carcinogenic microorganisms.
2. They enhance resistance to infections such as diarrhoea, irritable bowel syndrome, etc.
3. Strengthen the activity of intestinal microflora against some allergic reactions.

For example, Lactobacillus species act in the following ways on a diarrhoea-affected person.
1. Probiotic agents acidify the gut lumen.
2. Produce antimicrobial substances (such as lactolin, koumiss and lactobrevin) that act on pathogenic microorganisms like E. coli and S. aureus.
3. Inhibit the adhesion of pathogenic bacteria to the intestinal mucosal surface.
4. Act either by cytokine production or by immunomodulation.
5. Decrease bacterial translocation and alter the mucosal barrier.

Probiotics-Key Role in Treatment of Diseases
Probiotics have been used predominantly in the prevention and treatment of diarrhoea. They have been shown to reduce the duration and severity of viral diarrhoea, specifically rotavirus diarrhoea in infants. They have also been shown to reduce the risk of travellers' diarrhoea and antibiotic-associated diarrhoea. Potential uses of probiotics include the control of inflammatory diseases such as Crohn's disease and ulcerative colitis.

Frequently Employed Probiotic Strains
Dozens of microorganisms have been shown to have desirable probiotic qualities in vitro. However, ingested bacteria are normally killed in the stomach, and only a small number of strains have been shown to colonize the human GIT in clinical trials.

Probiotic Strains: These are clinical strains which show the following positive characteristics.
1. In vitro adherence to epithelial cells.
2. In vitro antimicrobial activity.
3. In vitro resistance to bile, hydrochloric acid and pancreatic juice.
4. Anti-carcinogenic activity in clinical trials.
5. Immune modulation or stimulation in human clinical trials.
6. Reduction of intestinal permeability in human clinical trials.
7. Colonization of the GI tract in human clinical trials.

Implantable Strains: Any microbial strain native to the GIT of man shown to survive passage through the GIT (appear live in stools) or to persist on biopsies of GI mucosa after cessation of feeding is called an implantable strain. Examples: Lactobacilli, LGG.

Ideal Characteristics of Probiotic Strains
1. They must be of human origin
2. They must be non-pathogenic
3. They must be acid and bile tolerant, remaining viable during processing and transit through the gastrointestinal tract.
4. They must be able to withstand technological processes and remain viable during the food shelf-life period.

Clinically Important Probiotic Strains

Commercially Available Probiotic Formulations
The main components of commercially available probiotics are homeostatic soil organisms (HSO). These organisms occur naturally in colony-forming and non-mutated forms and are collected from unpolluted soil and plants. The HSO come in nutrient-rich superfoods providing vitamins, minerals, trace elements, etc. The strains are made dormant using a microflora delivery system and are delivered directly into the GIT, where they multiply.

Composition of Primal Defense, a Probiotic Preparation
HSO Probiotic Blend, 1 billion CFU:
Lactobacillus acidophilus
Lactobacillus caucasicus
Lactobacillus fermenti
Lactobacillus helveticus
Bifidobacterium bifidum
Bacillus subtilis
Lactobacillus brevis
Lactobacillus leichmannii
Lactobacillus bulgaricus
Lactobacillus lactis

Action of HSO (Homeostatic Soil Organisms)
Impervious to stomach acids and the digestive process, these microorganisms move through the stomach to the intestinal tract, where they form colonies along the intestinal walls. HSO multiply in the intestine and compete with harmful bacteria and yeast for receptor sites. Once established, the organisms quickly begin producing the proper environment to absorb nutrients and help to re-establish the proper pH.
· HSOs work from the inside of the intestines, dislodging accumulated decay on the walls and flushing out waste.
· HSOs break down hydrocarbons, a unique ability to split food into its most basic elements, allowing almost total absorption through the digestive system. This increases overall nutrition and enhances cellular development.
· HSOs produce specific proteins that act as antigens, encouraging the immune system to produce huge pools of uncoded antibodies.
· HSOs are very aggressive against pathological molds, yeasts, fungi, bacteria, parasites and viruses.
· HSOs work in symbiosis with somatic (tissue or organ) cells to metabolise proteins and eliminate toxic waste.
· HSOs stimulate the body to produce natural alpha interferon, a potent immune system enhancer and a powerful inhibitor of viruses.

Precautions to be taken During Probiotic Therapy

1. No Chlorinated Water: Chlorine can kill the beneficial bacteria in probiotics, so chlorinated water should not be used during the therapy.
2. No Antibiotic Drugs: Antibiotics may kill the beneficial bacteria administered into the alimentary tract in the form of probiotics, although some strains may survive. Hence antibiotics should be avoided during probiotic therapy.
3. No Herbal Parasite Killers: Some herbal parasite killers may interfere with the homeostatic soil organisms and may decrease their ability to colonize.
4. Decreased Sugar and Processed Carbohydrates: These sugars may feed the parasites in the alimentary canal and contribute to yeast overgrowth, leading to imbalance among the intestinal flora.

5. Increased Intake of Fresh Vegetables: This will facilitate the proper balance of intestinal flora.
6. Increased Intake of Water: Water helps flush toxins out of the body and promotes proper adhesion of the intestinal flora to the gut walls.

Other frequently employed probiotic strains include Lactobacillus plantarum, Lactobacillus casei, Bacillus licheniformis and Saccharomyces boulardii.

Nanotechnology For Coating

In the coating industry, nanotechnology is increasingly viewed as the next-generation method of choice. Whether the problem is abrasion or damage due to UV light, nanomodified paints have shown great potential in overcoming it. Nanotechnology deals with structures having dimensions on the nanometer scale (0.1-100 nm). In other words, nanotechnology is a scientific tool for developing materials by controlling atoms and molecules.

Some of the unique properties of nanoparticles are:
- The sizes of nanoparticles are smaller than the wavelength of visible light.
- Van der Waals, electrostatic and magnetic forces are more dominant than gravitational force.
- The high surface-to-volume ratio of nanoparticles makes surface modification possible (see the sketch after this list).
- They are chemical- and heat-resistant.
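
The surface-to-volume point can be made concrete with a quick calculation. For an ideal spherical particle the ratio is 3/r, so it grows a thousandfold going from a micron-sized particle to a nanometre-sized one (a sketch under the spherical-particle assumption):

```python
import math

# Surface-to-volume ratio of an ideal sphere:
#   (4 * pi * r^2) / ((4/3) * pi * r^3) = 3 / r
def surface_to_volume(radius_nm: float) -> float:
    area = 4 * math.pi * radius_nm ** 2
    volume = (4.0 / 3.0) * math.pi * radius_nm ** 3
    return area / volume  # identical to 3 / radius_nm

for r in (1000.0, 100.0, 10.0, 1.0):  # radii from 1 micron down to 1 nm
    print(f"r = {r:6.1f} nm  ->  S/V = {surface_to_volume(r):.3f} per nm")
# The ratio rises 1000-fold as the radius shrinks from 1000 nm to 1 nm,
# which is why so much of a nanoparticle's material sits at the surface.
```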

NANOTECHNOLOGY IN THE COATING INDUSTRY
The coating industry is showing signs of growth globally. Today, coatings not only serve the purpose of beautification but are also a means of protecting valuable metals from corrosion. Hence, more money is now being invested in the research and development sector.

In the coating industry, nanocoatings can be applied in many ways, by procedures such as chemical vapour phase deposition, physical vapour phase deposition, electrochemical deposition, sol-gel methods, electro-spark deposition, and laser beam surface treatment. The appearance and usefulness of nanoparticles have brought several advantages and opportunities to the paints and coatings industry.

Some of the advantages of nanotechnology include:
- Better surface appearance.
- Good chemical resistance.
- Anti-reflective behaviour.
- Good adherence to different types of materials.
- Optical clarity.
USE OF NANOCOATING
Nanotechnology has been used for coating automotive engines, aircraft engines, cutting and machine tools, and equipment for manufacturing ceramic parts.

Self Cleaning Window Glass Products
Two major glass manufacturers, Pilkington and PPG, have come out with self-cleaning window glass products with a surface layer of nanoscale titanium dioxide particles. The self-cleaning glass exploits the UV rays in sunlight: when the particles interact with ultraviolet rays, the coating starts loosening dirt, and with water the dirt is distributed evenly across the surface. As a result, most dirt gets washed off easily, without streaking, whenever it rains.

Making use of the hydrophobic and oil-repellent properties of nanoparticles
When nanoparticles are coated onto a surface, the contact area with water or other solvents increases, and hence the surface tension at the surface decreases. For example, DELETUM 3000TM, a unique paint with oil and water repellency and impressive scratch and UV resistance, can be used on stone, brick, mortar, glass, plastics and other materials, and has potential applications in clean rooms, surgery facilities and a number of forensic settings.

Use of nanoparticles for high-performance coatings
When nanoparticles are produced by standard synthesis methods such as the sol-gel route, they retain a number of unreacted or still-active chemical groups on their surfaces that can be used for various chemical reactions. Nanotechnology can also be used to create and arrange nanopigments that respond to an electric field, so that paints change colour as a function of voltage. Using such methods, a research group at the Universidad Nacional Autónoma de México has produced nanobiomaterials: nanoparticles for metal-ion removal from polluted water. These are also used for coatings, providing protection ranging from anti-corrosion and anti-staining of pure silver objects to caries resistance for human teeth.

Nanopowders
Nanopowders that are made by using nanotechnology are currently used to manufacture a wide variety of products such as engine components, electronic devices, cosmetics, machining tools, medical/surgical instruments, nutraceuticals, paints, coatings, optics, magnetic devices, semiconductors, and graphitic parts. The manufacture of these involves grinding common, safe inorganic compounds such as zinc oxide (used in sunscreens) into a powder of ultra fine nanoparticles.

Future of nanotechnology
Nanotechnology is going to take current technology to new heights. Computers will compute faster, materials will become stronger, and medicine will cure more diseases. Nanotechnology, which works on the nanometer scale of molecules and atoms, will be a large part of this future, enabling great improvements in all these fields. Advanced nanotechnology will work with molecular precision, making a wide range of products that are impossible to make today. For example, nanoparticles can be used to produce hybrid coating systems that withstand a variety of environmental impacts.

Conclusion
Nanotechnology is going to lift the coating industry to a new level. Its useful properties, such as corrosion resistance, UV stability, gloss retention, and chemical and mechanical resistance, can be used to modify products in the required pattern. Nanocoatings do have some limitations, such as agglomeration of nanoparticles and hardening of ultrafine particles, but their benefits definitely outweigh the drawbacks.

Golden wave in detergents

The largest segment within the global industrial enzyme market is the market for technical enzymes, estimated at around US$ 980 million in 2002. In the technical enzymes category, detergent additives make up nearly two-thirds of the market. These enzymes are used as functional ingredients in laundry detergents and automatic dishwashing detergents. This article gives an overview of the detergent enzymes industry and discusses its manufacturing and downstream processing.


The original idea of using enzymes in detergents was described in 1913 by Dr Otto Rohm, who patented the use of crude pancreatic extracts in laundry pre-soak compositions to improve the removal of biological stains. In the same year, the first enzymatic detergent, named Burnus, was launched, but it was not popular because of its limitations. Subsequently, Bio-40, a detergent containing a bacterial protease, was produced in Switzerland, launched on the market in 1959, and gradually became popular. In the period from 1965 to 1970, the use and sale of detergent enzymes grew very fast. In 1970, use was disrupted because dust produced by the formulations led to allergies in some workers. This problem was overcome in 1975 by encapsulating the enzyme granules. From the 1980s to the 1990s, several changes took place in the detergent industry, such as the development of softening through the wash, of concentrated heavy-duty powder detergents, and of concentrated, structured or non-aqueous liquid detergents (Ee et al, 1997).

Detergent enzymes
Presently, detergent enzymes have become an integral part of detergent formulations. A look at the market share of detergent enzymes shows it to be very high in comparison with other enzyme applications. Enzymes to be used as detergent components must possess the following characteristics:
· Stability at temperatures over a broad range of 20°C to 50°C and even above
· An optimum pH in the alkaline or higher alkaline range
· Compatibility with the detergent
· Specificity towards different proteins

Major detergent enzymes include proteases, amylases, lipases, cellulases, and miscellaneous enzymes such as peroxidases and pullulanases. A recent trend is to reduce the phosphate content of detergents for environmental reasons; the phosphate may be replaced by sodium carbonate plus extra protease.

Proteases
Proteases were introduced in the market in 1959 in the detergent Bio-40, produced by Schnyder Ltd in Switzerland. Most powder and liquid laundry detergents on the market today contain proteases. Proteases are of two types:
· Alkaline protease from Bacillus licheniformis, with an optimum pH of 8, used for example in liquid laundry products (pH 7-8.5); commercially known as Alcalase (Novo Nordisk) and Optimase (Genencor International)
· High alkaline protease from Bacillus alkalophilus and Bacillus lentus, with an optimum pH of 10, used for example in powder laundry products and automatic dishwashing formulations; known by the trade names Savinase (Novo Nordisk) and Purafect (Genencor International)

Proteases enhance the cleaning of protein-based soils, such as grass and blood, by catalyzing the breakdown of the constituent proteins in these soils through hydrolysis of the amide bonds between individual amino acids. In the case of a serine endopeptidase, the active site contains a catalytic triad of amino acids:
· An aspartyl residue with a β-COO⁻ functional group
· A histidine residue containing the imidazole group
· A serine residue with a β-OH functional group

The serine hydroxyl group functions as a potential nucleophile, whereas both the aspartyl and histidine functional groups behave as general base catalysts facilitating the hydrolysis process.

The serine group initiates the nucleophilic attack on the peptide bond to form a tetrahedral intermediate, which undergoes an active hydrogen transfer facilitated by both the histidine and aspartyl residues. The net effect is the addition of water across the peptide bond, cleaving the substrate and regenerating the enzyme. Protease hydrolysis involves the transfer of electrons between the amino acids at the active site and the substrate. For proteases, the three-dimensional arrangement of the catalytic triad is required for the enzyme to be active. Disturbances in the conformation are likely to affect enzyme efficacy and therefore cleaning performance.

Limitations of proteases
· Early proteases were susceptible to oxygen bleaches and calcium sequestrants, but stable proteases can now be obtained
· Oxidative attack by peroxides or peracids on the methionine residue adjacent to the catalytic serine results in nearly 90% loss of enzyme activity. However, replacing this methionine with an oxidatively stable amino acid such as alanine improves the stability of the enzyme towards oxygen bleach (Boguslawski et al, 1992)
· The protease subtilisin requires at least one calcium ion, which maintains the three-dimensional structure of the enzyme. However, calcium-sequestering agents, used in many laundry procedures to control water hardness, can remove this calcium, resulting in decreased thermal and autolytic stability. This can be corrected by introducing negatively charged residues near the calcium-binding site, which increases the binding affinity of the enzyme for calcium and results in improved stability towards calcium sequestrants (Krawczyk et al, 1997)
· Proteases have limited application in the detergency of wool and silk, because of the proteinaceous nature of these fibres
· Proteases are added in an encapsulated or granulated form, which protects them from other detergent ingredients and eliminates the problem of autolysis or proteolysis of other enzymes. In aqueous detergent formulations, protease inhibitors prevent contact of the protease molecules with each other as well as with other enzyme molecules. This effect is nullified on dilution, and the enzyme molecules are then free to act on stains (Krawczyk et al, 1997)

Amylases
Amylases facilitate the removal of starch-based food soils by catalyzing the hydrolysis of glycosidic linkages in starch polymers. Starch-containing stains typically come from chocolate, gravy, spaghetti, cocoa, pudding, etc. Amylases can be classified as:

α-amylases: These enzymes catalyze the hydrolysis of the amylose fraction of starch by hydrolyzing the glycosidic bonds in the interior of the starch chain. This first step is called an endo-reaction and leads to oligosaccharides, in which short-chain, water-soluble dextrins are produced.

β-amylases: These enzymes act on dextrins from the reducing end and form maltose units.
Amyloglucosidases: These enzymes act on dextrin or maltose units and form glucose units.
Pullulanases or isoamylases: These degrade starch directly into linear dextrins, since they also attack α-1,6 glycosidic bonds.

α-amylases are the type mostly used in detergents, although recently other carbohydrate-cleaving enzymes such as pullulanases or isoamylases have also been described for this application. α-amylases bring about the primary hydrolysis of starch into oligosaccharides and dextrins. Currently, these enzymes are produced from bacteria such as Bacillus subtilis, Bacillus amyloliquefaciens, and Bacillus licheniformis, and are available under the trade names Maxamyl (Genencor International) or Termamyl (Novo Nordisk).

Lipases
Tomato-based sauces, butter, edible oils, chocolate and cosmetic stains are very difficult to remove because they are greasy. Body soils such as sebum and sweat on collars, cuffs and underarms are generally composed of a mixture of proteins, starch, pigments and lipids. Lipases hydrolyze the water-insoluble triglyceride components into more water-soluble products such as monoglycerides, diglycerides, free fatty acids and glycerol. Novo Nordisk launched the first lipase product in 1987, transferring the Lipolase gene into the fungus Aspergillus oryzae for industrial production; Genencor followed in 1993 with Lumafast (Pseudomonas mendocina) and Gist-Brocades in 1995 with Lipomax (Pseudomonas alcaligenes).

Currently, the known sources of lipases include mammalian lipases (human pancreatic lipase/colipase), fungal lipases (Rhizomucor miehei, Humicola lanuginosa, etc), yeast lipases (Candida rugosa, Candida antarctica) and bacterial lipases (Pseudomonas glumae, Pseudomonas aeruginosa, Chromobacterium viscosum) (Ishida et al, 1995).

Lipases possess a catalytic triad similar to that of the serine proteases of the trypsin and subtilisin type; hence, they are also classed as serine hydrolases. Lipases can decompose up to 25% of a fatty stain, which can then be removed very easily because of its more hydrophilic character (Dorrit et al, 1991). It is generally thought that lipases adsorb onto the hydrophobic stain during the washing period; during the drying cycle, when the water content decreases, the enzyme is activated and can hydrolyze the triglycerides in the stain. This facilitates the removal of the stain in the next wash cycle (Dorrit et al, 1991). The enzyme is also stable over a broad temperature range of 30°C to 60°C. These novel alkaline lipases also retain 100% activity in the presence of strong oxidants.

Biotechnology: The next big wave

The term biotechnology encompasses any technique that uses living organisms (e.g., microorganisms) in the production or modification of products. Historically, the classic definition of biotechnological drugs was proteins obtained from recombinant DNA technology. Foremost, recombinant DNA and monoclonal antibody (MAb) technologies are providing exciting opportunities for new pharmaceutical development and new approaches to the diagnosis, treatment and prevention of disease. The revolution in biotechnology is a result of recent research advances in intracellular chemistry, molecular biology, recombinant DNA technology, genetics and immunopharmacology. Clearly, biotechnology has established itself as a mainstay in pharmaceutical research and development.

The transition towards molecular medicine has already begun. As biotechnology advances and growing numbers of cancer-related genes are identified and cloned, therapy with biotechnological products will eventually supplement chemotherapy for many malignancies. More than 100 gene-therapy trials are in progress, including agents genetically engineered for specific toxicity to cancer cells. A total of 54 biotechnologically derived medications have been approved since human insulin in 1982. The commercial success of biotechnology has spurred the entry of many additional products into development pipelines.

Biotechnology allows for the development and production of new substances that were previously beyond the capacity of traditional technologies. This includes the design and production of new drugs with greater potency and specificity and, consequently, fewer side effects; one example is the treatment for multiple sclerosis. Biotechnology offers greater control over the manufacturing process, allowing significant reduction of the risk of contamination by infectious pathogens; an example is the blood products used to treat hemophilia. Biotechnology offers better product targeting for specific diseases and patient groups through the use of innovative technologies, in particular genetics; examples include treatments for rare diseases, lysosomal storage disorders and cancer. Some products are not naturally created in sufficient quantities for therapeutic purposes; biotechnology makes large-scale production of existing substances possible, as in the field of diabetes treatment. There are numerous techniques utilized to create biotechnological products. These include:
· Recombinant DNA technology
· Monoclonal antibody technology
· Polymerase chain reaction
· Gene therapy
· Nucleotide blockade (or antisense nucleic acids)
· Peptide technology

Recombinant DNA (rDNA)
DNA, deoxyribonucleic acid, has been called "the substance of life". It is DNA that constitutes genes, allowing cells to reproduce and maintain life. In the 1950s, James D. Watson and Francis H.C. Crick postulated the structure of DNA: a double helix, two strands of DNA coiled about each other like a spiral staircase. It is now known that the two strands of DNA are connected by the bases adenine, guanine, cytosine and thymine (A, G, C and T). The ability to selectively hydrolyze a population of DNA molecules with a number of endonucleases led to a technique for joining two different DNA molecules, termed recombinant DNA. Recombinant DNA technology allows the removal of a specific piece of DNA from a larger, more complex molecule. Consequently, recombinant DNAs have been prepared with DNA fragments from bacteria combined with fragments from humans, viruses with viruses, and so forth. The ability to join two different pieces of DNA together at specific sites within the molecules is achieved with two enzymes, a restriction endonuclease and a DNA ligase.
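To make the cutting step concrete, here is a toy sketch in Python (not a description of any particular laboratory protocol) of how a restriction endonuclease such as EcoRI divides a sequence: it recognizes the site GAATTC and cuts after the G on each strand. The input sequence below is invented for illustration.

# Toy restriction digest: cut a sequence at every GAATTC site,
# one base past the leading G (the EcoRI cut position).
def digest(seq, site="GAATTC", cut_offset=1):
    fragments, start = [], 0
    pos = seq.find(site)
    while pos != -1:
        fragments.append(seq[start:pos + cut_offset])  # keep the G on the left fragment
        start = pos + cut_offset
        pos = seq.find(site, pos + 1)
    fragments.append(seq[start:])                      # trailing fragment
    return fragments

print(digest("ATTGGAATTCCGATGAATTCTTA"))
# -> ['ATTGG', 'AATTCCGATG', 'AATTCTTA']

A DNA ligase then performs the complementary step, joining fragments with matching sticky ends.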

Monoclonal Antibodies
When a foreign body or antigen molecule enters the body, an immune response is initiated. The molecule may contain several different antigenic determinants, and lines of B-lymphocytes will proliferate, each secreting an immunoglobulin (i.e., an antibody) molecule that fits a single antigenic determinant or part of it. Monoclonal antibodies are produced by perpetuating the expression of a single B-lymphocyte. Through the hybridoma technology developed from Köhler and Milstein's research, it has been possible to produce identical, monospecific antibodies in almost unlimited quantities. These are constructed by the fusion of B-lymphocytes, stimulated with a specific antigen, with immortal myeloma cells. The resultant hybridomas can then be maintained in culture and are capable of producing large amounts of antibodies.

Polymerase chain reaction
Polymerase chain reaction is a biotechnological process whereby a target nucleic acid sequence (gene) is substantially amplified (i.e., over 100,000-fold). This enzymatic reaction occurs in repeated cycles of a three-step process. First, the DNA is denatured to separate the two strands. Then a nucleic acid primer is hybridized to each DNA strand at a specific location within the nucleic acid sequence. A DNA polymerase enzyme then extends each primer along the DNA strand to copy the target nucleic acid sequence. Each cycle roughly doubles the number of DNA molecules, and the cycle is repeated until sufficient DNA sequence material has been copied. For example, 20 cycles with a 90% success rate will yield roughly 375,000-fold amplification of a DNA sequence.
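As a quick check of that arithmetic, the short Python sketch below computes the expected amplification when each cycle multiplies the copy count by (1 + efficiency); the function name and numbers are illustrative only.

# Expected PCR amplification: each cycle multiplies the copy count
# by (1 + e), where e is the per-cycle success rate (efficiency).
def pcr_amplification(cycles, efficiency):
    return (1.0 + efficiency) ** cycles

# The figure quoted above: 20 cycles at a 90% success rate.
print(f"{pcr_amplification(20, 0.90):,.0f}-fold")  # ~375,900-fold

This reproduces the roughly 375,000-fold figure cited in the text.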

A simple illustration of these techniques is the cloning of DNA into a plasmid: the gene coding for insulin is inserted into a bacterial plasmid, which in turn carries the gene into a replicating bacterial cell that produces human insulin.

Plasmid: Plasmids are small circles of DNA found in bacterial cells, separate from the bacterial chromosome and smaller than it. They are able to pass readily from one cell to another, even when the cells are clearly from different species, far apart on the evolutionary scale. Consequently, plasmids can be used as vectors, permitting the reproduction of foreign DNA by means of the bacterial replicating system.

cDNA: Human genes are composed of coding and non-coding sequences. A copy of the coding sequences is called cDNA, and it can be obtained by reverse transcription of messenger RNA. Transcription and translation of the insulin cDNA allow the production of a functional insulin molecule.

Transfer of the Insulin gene into a plasmid vector
· The plasmid is cut across both strands by a restriction enzyme, leaving loose, sticky ends to which DNA can be attached.
· Special linking sequences are added to the human cDNA so that it will fit precisely into the loose ends of the opened plasmid DNA ring.
· The plasmid containing the human gene, also called a recombinant plasmid, is now ready to be inserted into another organism, such as a bacterial cell.

Cloning the Insulin gene
The recombinant plasmids and the bacterial cells are mixed together, and the plasmids enter the bacteria in a process called transfection. With the recombinant DNA molecule successfully inserted into the bacterial host, another property of plasmids can be exploited: their capacity to replicate. Once inside a bacterium, the plasmid containing the human cDNA can multiply to yield several dozen copies. When the bacteria divide, the plasmids are divided between the two daughter cells and continue to reproduce. With cells dividing rapidly (every 20 minutes), a bacterium containing human cDNA (encoding insulin, for example) will shortly produce many millions of similar cells (clones) containing the same human gene.
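The growth arithmetic behind "many millions" can be sketched in a few lines of Python; the time points chosen are arbitrary examples.

# With division every 20 minutes, the clone count doubles 3 times per hour.
def clones(hours, doubling_minutes=20.0):
    return 2.0 ** (hours * 60.0 / doubling_minutes)

for h in (2, 4, 8):
    print(f"After {h} h: ~{clones(h):,.0f} cells")
# After 8 h: ~16,777,216 cells per founding bacterium.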

Gene therapy
Gene therapy is a process in which exogenous genetic material is transferred into somatic cells to correct an inherited or acquired gene defect, or to introduce a new function or property into cells. Target conditions include common and life-threatening diseases like cystic fibrosis, hemophilia, sickle cell anemia and diabetes. Research has examined the use of a "self-renewing" stem cell population for the therapeutic transfer of genetic material. As an example, a patient's cells (e.g., T-lymphocytes) are harvested and grown in the laboratory. The cells receive the gene from a viral carrier and start to produce the missing protein necessary to correct the deficiency. The genetic causes of numerous primary immune deficiency disorders have been discovered and described; as a result, gene therapy can now be used as an alternative therapy.

Hunt for excellence: Process industries raise unique problems

The pace of change continues to quicken in the process industries. Sectors like chemical, oil, gas, pulp & paper, steel and metal processing are rethinking, reorganizing and reengineering their businesses so that they can outperform their competitors and become more profitable. However, quality problems lurk in these sectors, and conventional approaches to quality improvement often have to take a different route here.

Process industries are generally engaged in performing physical and chemical changes on materials. In order to build the desired properties, performance characteristics and economics into the finished products, processes must be controlled. Other aspects that seek attention include properties of raw materials, characteristics of the unit processes & operations, temperatures, pressures, concentrations and methods of measurement.

Three distinctive problems
In addition to conventional quality problems, process industries experience some unique ones that are quite different from those of batch-mode manufacturing. First, the measurement methods may themselves be miniature chemical, physical, or biological processes requiring control. Second, the testing time can be long relative to the batch reaction time, so control decisions must be anticipated. Also, product specifications may not fully define performance under widely varying customer conditions.

As in the case of most industries, the process industries share problems of rapid product obsolescence and the need for prompt conversion of research effort into profitable production. The quality needs of the marketplace change rapidly and, often, unpredictably. For example, matters related to ecology and consumer safety may require development of different fuels & lubricants for automobile engines; flame-retardant paints & textiles; biodegradable detergents, fertilisers, and insecticides.

Since some of its end products - particularly drugs and food - are directly consumed, the process industries have become further involved in how products from one sector (eg, fertilisers, pesticides) influence the composition & properties of products in another sector (food, drugs). Further, the ability to measure minute quantities has raised questions about the effect of pesticide residues accumulating in fish and drinking water from surface water runoff of fertilized fields.

And third, human response to materials - either in process or as finished goods - varies widely, both in intended usage and in accidental exposure. This necessitates extensive toxicity studies and clinical trials, which are time-consuming, expensive, and frequently complex. Hence, statistically designed experiments have become the order of the day.
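As a minimal illustration of a statistically designed experiment, the Python sketch below enumerates a full two-level factorial design over three hypothetical process factors; the factor names are invented for illustration, not taken from any particular study.

# A full 2^3 factorial design: every combination of coded factor
# levels (-1 = low, +1 = high) becomes one experimental run.
from itertools import product

factors = {
    "temperature": (-1, +1),
    "concentration": (-1, +1),
    "exposure_time": (-1, +1),
}

for run, levels in enumerate(product(*factors.values()), start=1):
    print(f"Run {run}: {dict(zip(factors, levels))}")

Eight runs cover every factor combination, allowing main effects and interactions to be estimated from a modest number of expensive trials.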

Traditionally, responsibilities for achieving quality control are concentrated in three broad areas:
· R&D laboratory
· Analytical or control laboratory
· Manufacturing plant

What is control?
Quality control can be defined as a managerial process during which one can:
· Evaluate actual performance
· Compare actual performance to goals
· Take action on the differences

The concept of control is holding the status quo - keeping a planned process in its planned state so that it is capable of meeting the operating goals. Unfortunately, a process that is designed to meet operating goals does not remain unaffected. In fact, all sorts of events intervene to damage the ability of the process to meet the predetermined goals. The main purpose of control is to minimise this damage either by prompt action to restore the status quo, or better yet, by preventing the damage from happening in the first place.

The control process takes place by use of the feedback loop and addresses sporadic problems; a minimal sketch in code follows the list.
· The sensor evaluates actual performance
· The sensor reports performance to an umpire
· This umpire also receives information on what the goal or standard is
· The umpire compares actual performance to the goal. If the difference warrants action, the umpire energizes an actuator
· The actuator makes the changes needed to bring performance in line with the goals
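The loop above can be sketched in a few lines of Python; the process model, goal, deadband and gain here are illustrative assumptions, not a real control design.

# Feedback loop: the sensor measures, the umpire compares against the
# goal, and the actuator acts only when the difference warrants it.
goal = 100.0          # the planned operating level
deadband = 0.5        # differences smaller than this warrant no action
gain = 0.4            # proportional actuator response (assumed)
process_value = 90.0  # the process starts in a disturbed state

for step in range(10):
    measured = process_value            # sensor evaluates performance
    error = goal - measured             # umpire compares to the goal
    if abs(error) > deadband:           # umpire energizes the actuator
        process_value += gain * error   # actuator restores the status quo
    print(f"step {step}: measured = {measured:.2f}")

Each pass through the loop pulls the measured value back towards the goal, which is exactly the "holding the status quo" behaviour described below.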

Scope for improvement
Improvement means the organised creation of beneficial change; the attainment of unprecedented levels of performance. This involves recognizing and eliminating chronic problems.

Quality improvement is considered necessary for two dimensions of quality: product features and freedom from deficiencies. To maintain and increase sales income, companies must continually evolve new product features and new processes to produce those features. Customer needs are a moving target. To maintain costs at a competitive level, companies must continually reduce the level of product and process deficiencies. Competitive costs are also a moving target.

The way ahead
Most of the conventional approaches to quality improvement are directly applicable to the process industries. However, the tools for analysis must be quite sensitive, since quite a few industrial products are high-volume, low-profit (commodity) materials; in such cases, a large amount of money rests on small differences in yield. The tools of analysis that deal with such small differences must also be flexible enough to accommodate numerous variables, many of which are non-linear and may interact with other variables. Today, such tools exist, and the industry has progressed significantly by using them to improve yields and controls.

Wasteless processing: Solution to cleaner chemical production

Wasteless processing seems to be the buzzword in today's world of industrialization, as it promises a healthy, pollutant-free world. Recycling the components of a chemical system at the right place, at the right time, in the right proportion and under the right conditions is a welcome step towards achieving this goal. It also results in maximum utilization of the so-called critical components of a system, attainment of the highest possible yield from a reaction, minimum production of unwanted wastes, and no wastage of precious components. Undoubtedly, the design of wasteless processing starts with recycling and ends with providing a healthy life for all.


In chemical engineering, any given system comprises many features. When a few of these features are combined to yield a better output, the complexity of the design grows rapidly. Most often, the unit operations involved in these systems are unique in nature, but while functioning in coordination with one another they become inter-related. In such a scenario, it is a welcome move to build on the inter-relationship between these operations as long as the effects are easily manageable and the yield is positive.

At one stage or another of such a system, this inter-relationship also introduces interaction criteria. Similar interaction and inter-relation can be detected not only in the unit operations but also in the equipment items used to carry out those operations, leading to a tightly knit network of equipment. This again calls for unity of the actions they perform by virtue of their purpose, and the number of functions being performed adds to the complexity of the basic functioning. All of this is acceptable as long as the end results are promising. Unfortunately, this is not always the case: quite often, one may face effects that compete with the basic function the system was originally designed for, which tend to apply a temporary brake to the system. In the majority of instances, however, these hurdles can be handled through thoughtful, innovative, or sometimes tricky application of principles that resist, remove or alter the effects.

The functioning of any chemical engineering system can be depicted by means of process operators, which interact among themselves to complete the system.

Mixing, chemical conversion and separation are examples of the basic process operators that make up a system. In order to ensure, enhance, retard or control the functioning of these process operators, many secondary process operators are employed. Also known as auxiliary process operators, examples include:
· Compression
· Expansion
· Heating
· Cooling
· Phase transformation, etc

Representation
Flow diagrams are used to conceptualize the way process operators function in any chemical engineering system design. These can be further classified as:
· Process flow diagram
· Energy flow diagram
These diagrams, also known as generalized models, help present each element as a collection of several unit operations, while exposing the physical and chemical highlights of the system as a whole. Adopting any one, or a combination, of the following criteria can further enhance the efficiency of a system:
· Performance improvement of the basic process operators
· Introducing variances between the process flows
· Incorporating additional basic process equipment at appropriate places
· Incorporating additional auxiliary process factors at appropriate places
· Incorporating additional process flows at appropriate places
· Relieving unwanted intermediate produce at the earliest possible retention time
· Incorporating a process de-routing at the onset of an intermediate produce
· Returning the intermediate produce back into the system at appropriate places, etc

In fact, the list may be endless, as the possibilities are numerous; they arise from the many ways and means available for interconnecting the process operators and subsystems of a system.

The flow scheme can be classified based on how the component streams are designed to continue in a pre-defined path. These include:
· Series flow scheme: If the stream of components leaves a given equipment item only to enter as the input to the next one, this is termed a series flow scheme. A by-pass flow system is considered an extension of this simple scheme.
· Parallel flow scheme: A parallel flow scheme is one in which a single feed is diverted to produce two or more intermediate produce, but with an aim of combining all these to get a single output product only.
· Cross flow scheme: A cross flow scheme routes streams of intermediate products across branches of the flow sheet. Practical schemes are generally a combination of one or more of the series, parallel, cross or recycle flow schemes.
· Recycle scheme: A recycle scheme involves the return of a process stream from one equipment item to another, or from one stage of an equipment item to another stage of the same equipment, as sketched just below.
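A minimal Python sketch, with made-up numbers, shows why a recycle scheme raises overall conversion: unreacted feed is separated and returned to the reactor inlet instead of leaving as waste.

# Iterate a simple flow sheet until the recycle stream converges.
fresh_feed = 100.0          # kg/h of reactant entering the system (assumed)
per_pass_conversion = 0.60  # fraction converted on each reactor pass (assumed)
recycle = 0.0               # initial guess for the recycle stream

for _ in range(50):
    reactor_inlet = fresh_feed + recycle
    converted = per_pass_conversion * reactor_inlet
    recycle = reactor_inlet - converted  # separator returns unreacted feed

print(f"Overall conversion: {converted / fresh_feed:.3f}")  # approaches 1.0

Even with only 60% conversion per pass, the overall conversion of fresh feed approaches 100%, because nothing leaves the boundary except product.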

Structure
If one draws a box around a given complete process, the net result is only the feed streams and product streams. This representation may look too simple to capture the intricacies of a practical process, but it is a major aid in understanding the design variables that may affect, slightly or adversely, the overall material balances of the process or system as a separate entity. The necessity and importance of the material balances cannot be overemphasised, since they form the dominant part of any system flow diagram and serve its purpose. Moreover, it is not worth spending much time investigating the design variables where the cost of the products or by-products is much less than that of the raw materials. Consideration of such an input-output structure of the flow sheet, and of the decisions that affect this structure, takes priority over consideration of recycle system designs.

Any flow sheet can be simplified successively via better designs. This leads to a smoother and more efficient method for solving inherent, derived or anticipated problems in the system even before project implementation begins. Besides, process and system limitations can be envisaged and defined with great clarity on the drawing board itself. It also aids in developing systematic procedures without getting lost in procedural details. However, the underlying structure can differ with the type of process.

Scheme
A recycling process involves the use of a return process stream. If sets of reactions take place in a system at different temperatures or pressures, or in the presence of different catalysts or accelerators, this implies the use of a number of reactors and complicated process flow schemes. Here, the purpose is to extract the majority of the work out of every single component and step of the process before discarding anything as waste. Thus, recycling contributes greatly to wasteless processing; however, it has to be introduced in the right fashion, at the right place and under the right process conditions.

In the majority of cases, it is possible to associate a number of reaction steps with a single reaction vessel. This opens the way to associating the corresponding feed streams with the particular reaction vessel in which the designed reaction is to take place.

In any process, the decision to incorporate recycling must be taken after thorough consideration, ensuring that no two components are separated only to be allowed to remix at the inlet of a reaction vessel.

One way of setting up a wasteless flow scheme that is closed-circuit in nature and highly effective in its end results is to use recycling. This returns some of the exit streams to the process at different entry points. Besides improving overall conversion to product, this also ensures more complete utilization of the source materials and energy supplied. In addition, the conditions under which the conversion proceeds towards completion also improve.

Classification
It is essential to clearly distinguish between the different recycling systems, viz liquid and gaseous streams. Gaseous recycling requires expensive compressors, while liquid recycle systems require only pumps (with the exception of high-head, high-volume ones), whose costs are much lower than those of compressors, furnaces or distillation units.

The use of recycling, as far as chemical engineering is concerned, serves only a few major purposes. For example:
· Where energy is to be recuperated: if the energy fed to the system is to be utilized to the maximum extent, recycling comes in handy
· Where inert solvents used in single or multiple reactions in the system are to be exploited to the maximum advantage of the system
· Where catalysts used in the reactions in the system must be utilized to the maximum extent
· Where the catalysts or other components used in the reactions are prohibitively expensive
· Where emissions to the outside are objectionable and have to be restricted or minimised to the greatest possible extent
· Where the operating conditions are to be optimized

Contemporary diagnostic and control devices

Traditional diagnostic systems often require specialised hardware for monitoring a specific function. Besides, they have very limited connectivity to operations, engineering, or business functions within the plant. To overcome these shortcomings, a new architecture capable of handling the volume and diversity of information is being used. This article explains the benefits of field-based architecture as it applies to advanced diagnostics and control in the field as a total solution to process automation.


Diagnostic and monitoring systems have been available for many years; however, they have been hampered by several fundamental limitations. Generally, diagnostic systems have been proprietary and application-specific, with little benefit outside a narrowly defined piece of equipment or process. In addition, they have been too expensive to include as part of the overall process automation and monitoring system. Traditional diagnostic systems often require specialised hardware for monitoring a specific function. Moreover, most monitoring systems lack real data from the process system or the devices they are designed to monitor.

Finally, monitoring systems have very limited connectivity to operations, engineering, or business functions within the plant. User interfaces, databases, annunciation, and other functions are frequently proprietary and not interoperable. Solving these shortcomings requires a new approach to diagnostics and monitoring, and a new architecture capable of handling the volume and diversity of information needed to detect problems, isolate or localize the problem to a specific source, and determine the root cause so that the problem can be effectively addressed.

A field-based architecture combining advanced diagnostics and control in the field effectively addresses many of these limitations, and provides an unprecedented opportunity to improve financial performance.

Overview of field-based architecture
A field-based architecture uses the power of field intelligence to improve plant performance. Over the past years, the combination of increased computing power in field devices, higher performance in device sensors, and a fully capable, standard communications protocol, Foundation Fieldbus, has enabled field devices to deliver a step increase in functionality and value. These factors may change the fundamental definition of a process automation architecture, and of process control.

Field devices today can perform functions including, but not limited to, closed-loop basic and advanced regulatory control and discrete control. These devices can also perform statistical process monitoring; in particular, they can detect and calculate both the actual process variability and the theoretical minimum variability. Improved sensors can detect process conditions over a broader frequency range and with greater accuracy and repeatability than in the past. This information reveals fundamental process signatures such as drift, bias, noise, stuck, and spike. Combined with process control information, these signatures can be used to detect a wide variety of equipment and process conditions, ranging from remaining sensor life or plugged impulse lines, to control abnormalities in flow, temperature and level loops, to operational or performance problems with process units. This value can be extracted only if the automation architecture can access and use the information in a coordinated way.
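The comparison of actual against theoretical minimum variability can be sketched roughly as follows; the sample series and the naive lag-1 residual estimate are illustrative assumptions, not a vendor algorithm.

# Compare the spread of raw readings against a crude estimate of the
# irreducible (minimum) spread, taken from sample-to-sample residuals.
import statistics

samples = [50.0, 50.4, 49.8, 50.9, 50.2, 49.7, 50.5, 50.1]  # invented readings

actual_sd = statistics.pstdev(samples)                 # actual process variability
diffs = [b - a for a, b in zip(samples, samples[1:])]  # lag-1 residuals
minimum_sd = statistics.pstdev(diffs) / 2 ** 0.5       # per-sample residual spread

print(f"actual = {actual_sd:.3f}, estimated minimum = {minimum_sd:.3f}")

A large gap between the two figures suggests variability that better control could remove, which is the kind of insight the field device can compute locally.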

Traditional automation architectures are not capable of accessing, using, or delivering this functionality or value to the user. These architectures are designed to access a single value, or a value and basic status, from a field device, and deliver that information almost exclusively to the process control system for closed-loop control, or to the operator for viewing and manipulation. They are not designed to deliver control information such as mode, alarms and alerts, or setpoint changes back to field devices for use or analysis.

The field-based architecture is fundamentally different in its ability to access, disseminate, and use this information. First, data analysis is partially or completely done within the devices themselves, which reduces the communications bandwidth requirements by orders of magnitude. Additionally, information available to the raw sensor, but not to the system, can be used to perform analyses that are not possible in any other way. Finally, by combining this information with process control actions such as mode, setpoint, and load changes, fundamental insights into the health and performance of the equipment and the process become available for the first time.

Architectural suggestion
Signal validation techniques rely on accurate process information. Conventional centralised approaches can lack actual process information and instead use mathematical models to infer process conditions; if the models do not reflect actual process conditions, erroneous results are unavoidable. In addition, time delays in the delivery of information can mask actual process conditions. Smart field devices have the advantage of providing accurately time-stamped information directly to the control system as anomalies develop. Using pattern recognition and statistical analysis methods, field devices can now detect various process anomalies such as drift, bias, noise, spike, and stuck behaviors. This enables operators to take corrective steps faster, thus avoiding conditions that could cause a process upset or shutdown.

Increasing economic efficiency is the most important task of the various engineering groups and managers of industrial plants. Plant engineers traditionally approach efficiency through implementing optimum process control systems. However, an increase in economic efficiency cannot be reached simply by providing better control schemes to an operating plant. Improving plant availability and increasing efficiency come from early detection of anomalies, with condition-based, real-time maintenance leading to improved plant availability. The inefficiency of applying validation schemes at a high level becomes clear when we consider the signal linearisation, damping, and communication delays that mask the true readings of sensors: even if the readings are accurate by the time they reach these high-level sensor validation systems, the critical information hidden in the raw data has been lost.

Both of these approaches depend on recognition of the system condition and therefore depend directly on the quality of the observed signals. However, most of the previous work on signal validation concentrates on cases where there is either hardware or analytical redundancy. For instance, if there are three or more sensors measuring the same process variable, one can implement a consistency-checking or majority-voting algorithm for signal validation and anomaly detection, as sketched below.
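A minimal sketch of such a majority-voting check for three redundant sensors follows; the tolerance and readings are invented for illustration.

# Majority voting: accept the reading most other sensors agree with
# (within a tolerance), and flag the sensors that disagree.
def majority_vote(readings, tolerance=1.0):
    best = max(readings,
               key=lambda r: sum(abs(s - r) <= tolerance for s in readings))
    for i, r in enumerate(readings):
        if abs(r - best) > tolerance:
            print(f"Sensor {i} flagged as anomalous: {r}")
    return best

print("Validated value:", majority_vote([101.2, 100.8, 87.3]))  # flags sensor 2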

In addition, model-based techniques are heavily used when different types of measurements are available for the same process or system. Most of these techniques are based on modeling the normal behaviour of a sensor with an autoregressive time series and then monitoring its behaviour. These signal validation techniques have been integrated with high-level control and data acquisition systems to provide assistance to plant operators.

Technique for assembling information
Advances in communication and software technologies are now enabling the integration of islands of automation into an asset management system that gives the operator a plant-wide view.

The advent of the DCS and smart instruments in the 70s and 80s provided the foundation for the automated process control that is pervasive in today's industry. Process control has been credited with reducing the operating cost of process industry plants by 5-9%.

The advances in communications and software have now given birth to a new automation architecture, most often referred to as the field-based architecture. This architecture is founded on the ability to network field devices (sensors, valves, smart motors, etc) with control and asset management systems to provide an integrated information system that can be used for control and automated maintenance. Since field devices are closest to the process and provide the measurements on it, performing device- or process-related anomaly detection is a natural additional task for intelligent field devices.

Technique for disseminating information
Traditionally, fault detection has been part of the control system, implemented as signal validation modules. Field devices could not handle the tasks that fault detection methodologies require, due to the limited firmware capability of older technologies. However, today's smart transmitters, using advanced silicon technology, are capable of providing far richer information and data analysis.

Process anomalies exhibit five statistical signatures common to all measurement and process types: pressure, temperature, flow, level and others. Using pattern recognition and statistical analysis methods, field devices can now detect drift, bias, noise, spike, and stuck behaviors of each process (a sketch follows the list), where:
· Drift: Sensor/process output changes gradually
· Bias: Sensor/process output shows a level change
· Noise: Dynamic variation in the sensor/process output is increased
· Spike: Sensor/process output is momentarily very high or low
· Stuck: Dynamic variation in the sensor/process output is decreased
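Three of these signatures, spike, stuck and noise, can be detected from a short window of samples as in the Python sketch below; the thresholds and data are illustrative assumptions, not vendor values.

# Classify a window of readings against a known baseline spread.
import statistics

def classify(window, baseline_sd):
    mean = statistics.fmean(window)
    if any(abs(x - mean) > 4 * baseline_sd for x in window):
        return "spike"   # output momentarily very high or low
    sd = statistics.pstdev(window)
    if sd < 0.1 * baseline_sd:
        return "stuck"   # dynamic variation has collapsed
    if sd > 3 * baseline_sd:
        return "noise"   # dynamic variation has grown
    return "normal"

print(classify([50.1, 49.9, 50.0, 58.0, 50.1], baseline_sd=0.2))  # spike
print(classify([50.0, 50.0, 50.0, 50.0, 50.0], baseline_sd=0.2))  # stuck

Drift and bias detection would similarly compare the window mean and trend against the baseline, which is why this kind of analysis fits comfortably in device firmware.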

The key features of the developed local fault detection technology that make it applicable to a broad range of industrial processes are:
· No additional hardware in the detector system is assumed
· No mathematical model of the process is necessary
· No mathematical model of the sensor is required

Advanced diagnostics approaches address fault detection, fault isolation and root cause analysis.

Automation in the chemical industry

The arrival of digital technology has changed the outlook of industrial work. Today, with high-tech devices and equipment, it is becoming possible to achieve the seemingly impossible. Read on to know more about how a design software tool, specifically acquired for building 3D plant concept models, helps in the early phase of plant engineering.

The chemical industry covers a wide range of activities, like designing equipment and developing processes for manufacturing new materials on a large scale. These materials are used in diverse sectors such as aerospace, biotechnology and IT, among others. This is in addition to fine-tuning processes to make existing processes more cost-effective.

The domain of chemical engineering not only includes converting laboratory and pilot-scale plants to commercial plants but also involves designing reactors, distillation columns, pumps, heat exchangers, etc. This helps in implementing the mass and heat transfer processes required to convert raw materials to the end products. In addition to individual pieces of equipment, the plant as a whole is designed keeping in mind good engineering practices as well as mandatory guidelines laid down by the American Petroleum Institute and other Chemical Process Industry (CPI) governing bodies.

Further, the process involves preparation of the plot plan to position process areas, utility areas and tankages. This provides access for the movement of man and machine for raw material inputs, for movement of intermediate and finished products within and between plant areas, and for construction and maintenance.

Piping design and routing remains a major plant engineering activity in all CPI projects. Pipelines provide the means of transfer of all fluid process and utility materials such as steam, cooling water, compressed air, etc. Design of piping (selection of material, size and routing) has a major bearing on plant activities.

The design method
In the pre-automation phase, engineers manually prepared equipment assembly drawings, structural/civil general arrangements, equipment layout drawings, piping general arrangement drawings (floor-wise plans and elevations), pipe rack routing, isometrics, etc. While the design of individual equipment and structures requires specific domain knowledge, piping requires a high level of spatial detailing in addition to piping knowledge.

More than any other aspect of plant engineering, piping is the most prone to design changes, driven by the requirements of piping stress analysis. Results of pipe stress analysis cause changes to pipe routing that, in turn, often lead to a cascade of changes. During installation at site, it is often seen that pipe routing as planned by the piping designers leads to interferences between piping and structures or equipment, either directly or indirectly: pipes may cross access areas needed for operation and maintenance of equipment (opening of manholes and body flanges, accessibility for viewing, etc), for instruments (viewing of flow/level/pressure/temperature indicators and sight glasses, operating bypass valves on control valves, calibration and maintenance of instruments), or for the operation of valves. This leads to rework and rerouting of pipelines, and hence to extra pipe material, additional man-hours of effort and, eventually, delays in production deadlines.

The automation era
The introduction of plant design systems has made all this a thing of the past. The biggest benefit of plant design automation is that disciplines like piping, equipment, structures and electrical raceways are created under the same umbrella of plant design. Multidiscipline space allocation avoids the possibility of clashes and site rework. This not only saves costly rework at site but also ensures that a modification or alteration in one discipline cascades correctly to the related disciplines. Advanced features like space claims (soft clashes) for equipment maintenance areas are defined in advance, so that error messages are generated even in the case of faulty modelling.

Almost all chemical plants involve a huge amount of piping to handle various working fluids, and the engineering of a chemical plant consumes the maximum man-hours in the creation of isometric drawings. Isometric drawings are 3-dimensional views of the plant piping, making plant erection simple and easy to interpret, so it is imperative to reduce the cycle time for isometric creation. A plant design solution creates isometric drawings automatically; however, company standards need to be customised at least once to give the correct output. These drawings are also intelligent, as they are linked with the parent model: whenever the model is edited, the isometric drawings are updated automatically.

Plant design & internet
The World Wide Web has created a revolution in the corporate world. It is now a known fact that client-server-based applications have been redefined by incorporating web-based tools. This has resulted in the development of many user-friendly tools, which are ideal for design and development in a federated environment. In the chemical industry too, large EPCs are now sub-contracting to other vendors, and the whole project is merged in a shared plant design environment. Besides, during the construction stage, the client is apprised of the nitty-gritty of the project's progress with web-based tools. The client then collaborates with the developers to get the plant built to his own requirements. The scheduling of the project is also done on the same platform, which helps in planning for the required materials.

Conclusion
Thus, plant design automation is an integral part of the chemical industry, and progress in this industry depends heavily on the kind of automation tools that are incorporated. Web-based technology has also played a major role in enabling plant designers to keep strong control over the progress and scheduling of their projects.