Cees Th. Smit Sibinga
IQM Consulting and University of Groningen, De Gast 46, 9801AE Zuidhorn, The Netherlands
Abstract: Transfusion of blood from human to human became a clinical reality around the turn of the 20th century with the discovery of the blood group antigens A and B, which made it possible to match the blood of the healthy donor with that of the recipient. During the past century, the need for accurate documentation and archiving of blood group and compatibility data became an increasing issue. World War II gave the development a specific boost with the introduction of the first primitive mechanical data processing, allowing more secure traceability and retrievability of data. The simultaneous development of better collection and preservation technologies allowed banking and transportation of blood. The introduction of machine learning in the automation of equipment contributed to uniformity and standardization, followed by the development of deep learning algorithms to predict transfusion needs and compatibility, contributing to improved safety of patient care.
Keywords: artificial intelligence in transfusion medicine; blood supply and consumption; blood transfusion chain; history of transfusion medicine; vein-to-vein digital footprint
Author for correspondence: Cees Th. Smit Sibinga, IQM Consulting, The Netherlands, Email: c.sibinga@planet.nl
Cite this chapter as: Smit Sibinga CTh. Transfusion Medicine: From AB0 to AI (Artificial Intelligence). In: Linwood SL, editor. Digital Health. Brisbane (AU): Exon Publications. Online first 2022 Feb 22.
Doi: https://doi.org/10.36255/exon-publications-digital-health-transfusion-medicine
In: Linwood SL, editor. Digital Health. Exon Publications, Brisbane, Australia. ISBN: 978-0-6453320-1-8. Doi: https://doi.org/10.36255/exon-publications-digital-health
Copyright: The Authors.
License: This open access article is licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
Transfusion medicine has come a long way. Over the centuries, there has always been a special, sometimes mystical attraction to this elixir of life with its life-saving potential. Following a centuries-long experimental period of applying blood transfusion, the actual history of current transfusion approaches and practices starts around the turn of the 20th century. The discovery of markers (antigens) on red cell membranes, identified as the AB0 blood group system, opened the gates to immune compatibility. Blood was collected in a citrate solution in open glass receptacles closed with a flamed cottonwool stopper. Documentation was rudimentary. During World War I (WWI), flamed cottonwool-stoppered retorts with collected blood were flown into military field lazarettes in France to be transfused to wounded soldiers, both allied and enemy. With the extension of blood group research and pre-transfusion matching, data started to be collected in a more structured way. Blood banks and transfusion services took responsibility for the collection and storage of blood. The system was based on replacement of what had been transfused. World War II (WWII), preceded by the Spanish Civil War, initiated an acceleration of developments: the introduction of screw-cap bottles, standardized preservation fluids, and the structuring of the blood supply. Separation started, and plasma was freeze-dried and tinned to allow transportation over larger distances to battlefield theatres.
More accurate data collection, archiving and management took off, supported by a first generation of primitive mechanical data processors, and processing equipment with first-generation machine learning principles was developed. Around the year 2000, artificial intelligence (AI) slowly dripped into the extending field of transfusion medicine with its rapidly growing databases, processing, and testing. Thanks to space technology, computers with enormous processing capacity became available, allowing AI technology and digital footprinting to be introduced, contributing to the prevention of human errors and supporting improved overall safety of the blood supply. This chapter provides an overview of the history of transfusion medicine, from its humble beginnings to rapidly evolving AI and digital health.
During the reign of the Pharaohs in ancient Egypt, it was a royal custom to bathe brave wounded warriors in bull's blood to accelerate the healing of their battle wounds. The rationale was a belief that the power of the bull would be transferred through the blood to the wounded warrior and provide its strong and healthy healing capacities to cure the injured skin and muscles. The Roman writer Ovidius describes in the seventh book of his Metamorphoses an early exchange transfusion approach to rejuvenate the old king Aeson. The old Vikings drank the blood of seals and whales as a cure against epilepsy and scurvy, reflecting a primitive scientific thinking about the beneficial and healing use of human and animal blood. An old Hebrew manuscript discloses the use of blood as a fluid with special healing powers in leprosy victims. In 1492, the year Christopher Columbus discovered an entirely new world, the Americas, Pope Innocentius VIII, suffering from a chronic renal disease, was advised by a mystical doctor from Rome to be transfused with the blood of three young and healthy men. The available old parchment discloses that such practice would save him from dying and give him back his youthful strength (1).
During the early Renaissance epoch in Europe, the Milanese scientist Hieronymus Cardanus and Magnus Pegelius from Rostock hypothesized with a certain vision that transfusion of blood from one individual to another should be feasible. However, the remainder of the 16th century left no documents indicating research or progress to support their hypothesis. In 1615, the naturalist, doctor of medicine, and philosopher Andreas Libavius from Germany proclaimed his strong plea for transfusion of blood and described in detail a method for transfusion using a silver catheter as an arterio-arterial shunt from donor to recipient (2). An important early 17th century milestone in the history of transfusion medicine was the academic experimental study and discovery of the blood circulation in 1613, described in 1628, by the English court physician William Harvey in his famous monograph 'Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus' (Figure 1) (3). The book initiated unbridled speculation on the possibilities of transfusing blood and infusing medicines intravenously in humans. The Oxford physician and anatomist Richard Lower was the first scientist to demonstrate that blood transfusion could be lifesaving. In his 1666 experiment, he first almost exsanguinated a dog and then transfused blood from a healthy dog, causing complete recovery of the victim (4).
Figure 1. William Harvey’s Publication.
A year later, in 1667, Richard Lower presented a first human experiment in which a man was hired for the sum of 20 shillings by the Royal College to undergo, within a month, two intravenous lambs' blood transfusions. The second transfusion did not provide a very cheerful outcome. At the same time in France, at the court of Louis XIV, the young physician and "most able Cartesian philosopher" Jean Baptiste Denis from Montpellier, together with the surgeon Paul Emmerez, performed a series of dog-to-dog transfusion experiments (5). When Denis was presented with a severely ill young boy with fever and weakness due to many traditional bloodlettings, he decided to transfuse the boy with lambs' blood, which resulted in a miraculous recovery. Shortly after this resounding success, a second patient, a healthy 45-year-old man, was successfully transfused, followed by the son of the Swedish Minister of Foreign Affairs, who had fallen seriously ill. Denis decided to treat him with two successive transfusions, with good success. The report was published in the July 1667 edition of the Philosophical Transactions of the British Royal Society (6). The next patient transfused was a 34-year-old man who suffered from a tragic love affair. He received several calf blood transfusions over a period of a couple of months but after the second transfusion started to react with fever, pain in the loins, increased pulse rate, sweating, dyspnoea, and black urine. Denis carefully documented this event, describing for the first time in medical history a classical acute haemolytic transfusion reaction. The man survived, but when a few months later his mental condition deteriorated again, Denis decided to treat him with another transfusion, which unfortunately caused his death due to acute lethal haemolysis. Denis was accused of murder but pleaded not guilty during the Paris Châtelet trial. However, the conservative Paris University, the Sorbonne, forbade further blood transfusion experiments. In England, too, further experiments were forbidden, followed by the anathema of the Pope. The root causes, however, remained undisclosed.
A century and a half later, in 1818, the progressive gynecologist and obstetrician James Blundell from London showed a deep interest in the potential of blood transfusion (7). His interest was based not only on personal experience with women in labor who bled to death postpartum, but also on the scientific experiments of John Leacock from Barbados, who reported on systematic animal experiments in Edinburgh and observed the need for species-specificity when transfusing blood. He recommended applying that principle to human blood transfusion, which James Blundell was the first to practice (8). They unraveled one of the root causes of the lethal transfusion events experienced earlier: immunology. However, it took another 80 years before, around the turn of the 20th century, the Viennese physician, pathologist, and scientist Karl Landsteiner discovered among his laboratory employees the presence of specific, genetically determined markers or antigens on the surface of red cells. He named them blood groups and, following the alphabet, distinguished groups A and B; individuals who expressed neither A nor B antigens he called group 0 (zero, because of the absence of the A and/or B marker) (9). That breakthrough unraveled the second root cause of the events experienced in the past: blood group serology as a part of immunology, for which Karl Landsteiner was awarded the 1930 Nobel Prize in Physiology or Medicine. To prevent these lethal events, transfusion of compatible blood (of the same blood group) would be the solution to immunologically safe clinical transfusion practice in hospitals.
Up until WWI, blood transfusion had been a direct person-to-person practice, with the donor lying next to the patient and a vein-to-vein connection of rubber tubing, a cumbersome technique. The donor was tested for the AB0 blood group and, from 1908 onwards (Ottenberg), the blood of donor and recipient was crossmatched to assure compatibility (10). In 1914, Albert Hustin from Belgium introduced a citrate solution as an anticoagulant to prevent clotting and allow storage of collected blood in glass retorts closed with a flamed cottonwool stopper (11). That brought with it the need for more accurate documentation, archiving and retrievability of data, all managed manually. The wounded soldiers, allied and enemy, in the military field lazarettes in France during WWI were treated with blood that was flown over from England, which required specific logistics to protect data from errors and mishaps. To assure survival of the physiological functions of red blood cells, the sodium citrate was mixed with a glucose-saline solution and used 1:1 during collection of the blood. Following WWI, civil blood transfusion services were developed, supported by potential blood donors, and initially operating on a replacement stock principle. This required an even more extensive administration of the growing donor population and a unique numbering of the receptacles of collected blood, but still all manual, with the risk of clerical, identification, blood group interpretation, and transcription errors that could lead to serious adverse transfusion events.
WWII triggered an acceleration in the development of the blood supply and its distribution. Re-usable glass bottles with a screw cap and rubber stopper, an improved small-volume anticoagulant and preservation fluid containing acidified citrate and dextrose (ACD) to allow longer storage and survival of collected blood, and the development of a cold chain principle came into effect. A substantial share of these developments took place in London, where Mollison, Loutit, and Young did their experiments (12). Following principles from the dairy industry, plasma was separated from the cellular components and freeze-dried in tins destined for military field use in Europe and the Pacific (13). Large-scale plasma separation and fractionation to recover albumin was developed by Cohn at Harvard University in Boston; it required pharmaceutical manufacturing principles and large pools, for which the military-industrial system of Good Manufacturing Practices (GMP) was introduced (14). GMP demands accurate documentation and traceability to handle the growing volumes of data to be processed and distributed around the world. Post-WWII, data were stored and managed using first-generation computers and so-called punch cards. That was the start of the development towards AI.
As expected, developments accelerated in those parts of the world where the infrastructure had already been created to allow these and other comparable developments. This happened almost exclusively in the dominant colonial countries such as the UK and France, the New World on the North American continent, and their post-WWII spheres of influence, such as Japan. Blood separation technology was developed in which temperature, speed (rotations per minute), acceleration and deceleration variables were calculated and captured in programmed algorithms, allowing different standardized programs for the separation of platelet-rich plasma, red cells and plasma, each with an optimal algorithm to recover as much as possible of these precious components with a relatively small standard deviation and range: consistency of production. Two technologies developed in parallel: the robust cooled blood bank centrifuge to separate units of whole blood, and hemapheresis equipment to collect specific cells or plasma from a single donor, returning the processed blood on a continuous or discontinuous flow principle. This last technology could also be used for therapeutic purposes, e.g., plasma exchange.
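As a minimal illustration of how such separation variables can be captured in a programmed algorithm, the sketch below computes the relative centrifugal force from rotor radius and spin speed using the standard formula RCF = 1.118 × 10⁻⁵ × r(cm) × rpm², and groups speed and duration into named separation programs. The program names and numerical values are hypothetical and serve only to show the principle; they are not taken from any particular separator.

```python
def relative_centrifugal_force(radius_cm: float, rpm: float) -> float:
    """Relative centrifugal force (in units of g) for a given rotor radius and speed.

    Standard formula: RCF = 1.118e-5 * r[cm] * rpm^2
    """
    return 1.118e-5 * radius_cm * rpm ** 2


# Hypothetical separation programs: spin speed and duration differ per target
# component; the numbers below are illustrative placeholders only.
SEPARATION_PROGRAMS = {
    "platelet_rich_plasma": {"rpm": 2000, "minutes": 10},  # soft spin
    "red_cells_and_plasma": {"rpm": 3500, "minutes": 15},  # hard spin
}

if __name__ == "__main__":
    rotor_radius_cm = 15.0  # assumed rotor radius
    for name, program in SEPARATION_PROGRAMS.items():
        rcf = relative_centrifugal_force(rotor_radius_cm, program["rpm"])
        print(f"{name}: {program['rpm']} rpm for {program['minutes']} min = {rcf:.0f} x g")
```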
Engineering guided by Tullis and Latham Jr. led to the development of a simplified reusable bowl mounted in a centrifuge for the discontinuous-flow separation and collection of plasma, and eventually to a prototype of the current bowl and an apheresis machine manufactured by Haemonetics Inc. in Braintree, MA, introduced in the 1960s (15). What exactly happens in the bowl was not investigated until 1992, when Hein Smit Sibinga, a Dutch mechanical engineering graduate student at the Enschede Technical University in the Netherlands, designed a computer imaging program and software based on AI algorithms to unravel the physics of the separation in a spinning bowl, allowing variation of the mechanical and physical parameters to optimize the separation process (16). Almost simultaneously, in the same 1960s, the continuous-flow centrifugation principle was developed by Eisel and Freireich at the Cancer Institute in Houston, TX, in close co-operation with Judson, a project engineer at International Business Machines Inc. (IBM) (17). This principle was designed primarily to collect granulocytes and resulted in the construction of the NCI-IBM continuous-flow centrifugation blood cell separator, available for field trials in 1966. Since then, several engineering improvements and AI machine learning principles have been designed to allow purer separation of cell fractions using the physical principles of centrifugal force.
Similarly, testing for blood groups and transfusion-transmissible disease markers (hepatitis, syphilis, and HIV) developed from small-scale, semi-automated methods into large, fully automated laboratory test robots with machine-readable sample ID numbers, automated processing, and storage and printing of results. The principles of machine learning (ML) were born and matured at high speed. The introduction of barcoding in the food industry triggered its introduction in blood manufacturing establishments as well as in hospitals for patient identification, where barcode ID wristbands prevent clerical, transcription, and identification errors.
With the outbreak of the HIV/AIDS pandemic in the 1980s, and the subsequently emerging transmissible agents such as prions and the West Nile and Zika viruses, the need for rapid traceability of data (donors, processing and testing, distribution) became a high priority. The blood supply in the more advanced world was largely reorganized into legally independent Blood Establishments to achieve substantial economies of scale, supplying more hospitals in a larger catchment area while being controlled by more effective oversight and Regulatory Authorities. To achieve uniformity, exchangeability between institutions and countries, and traceability, the European Union (EU) designed a regulatory structure and legislative reference setting standards of quality and safety for the collection, testing, processing, storage and distribution of human blood and blood components, for all member countries to implement and operate (18). That implied an enormous extension of data and data management without loss of quality and accuracy. Similar developments took place in Australia, Canada, and Japan. It was recognized that the blood supply has two distinctly different elements: the manufacturing of blood products, governed by product liability, and their consumption in healthcare, governed by consumer rights protection. Both product liability and consumer (patient) rights protection brought awareness of the legal consequences of the entire vein-to-vein transfusion chain, with documentation (big data) as the core element for evidence and traceability.
The manufacturing establishment has two vein-to-vein interfaces: the hospitals (clinical consumption, customers/patients) and the community (source material, suppliers/donors) (19). Both interfaces have a market and communication function. They are elements of a larger environment and form the interconnections of the blood supply system as an integral part of the healthcare system and structure. That determines the flow of information and data, quantitatively and qualitatively. In short, the need is determined by the patient at the bedside, leading to a demand, which is met by the supply of products provided by the manufacturing or procurement establishment. The source material, human blood, comes from the community as a blood market.
The blood transfusion chain has two distinctly different operational and managerial parts (19): clinical or consumption, and collection and manufacturing of the source material. The clinical or consumption part has three major processes: (i) bedside—diagnosis, indication and ordering; (ii) laboratory/blood transfusion service—immunohematology (AB0/RhD), selection of blood component and compatibility testing; and (iii) bedside—patient identification, matching with the prepared blood component, transfusion, and observation of outcome. The collection and manufacturing of the source material, which is the human blood, has four major aspects: (i) community—awareness, donor selection and blood collection; (ii) processing—separation of the source material into its cellular components and plasma; (iii) laboratory—testing of each collected unit for blood group and Transfusion Transmissible Infection markers (TTI), product specifications; and (iv) storage and distribution—quarantine release and labelling, inventory management, cold chain for storage, and transport/distribution to the hospitals.
Each of these primary or core processes routinely generates data. The quality of these routinely generated data is of paramount importance and therefore needs standardization and commitment to ensure consistency of process operations and data generation. The quality and consistency of data determine the outcome of AI processes, whether machine learning or deep learning. The processes have subprocesses and procedures to transform an input into an output. The primary processes are supported by secondary or supportive processes such as human resources, finance and administration, quality management, education, purchase of consumables and equipment, information and communication technology (ICT), public awareness campaigning, emergency preparedness, waste management, maintenance and repair, domestic services, etc. The supportive processes are essential to the operation and management of the primary processes, and decisive for the implementation of the strategies initiated by the third layer, the steering processes. The steering processes are based on the mission and vision statements or policies of the healthcare institution and blood establishment. Key in this chain or flow is data management: interrelated and interconnected documentation and archiving to achieve consistency, statistical evaluation of outcomes, benchmarking and prediction of volume, and changes in each of these processes and procedures (20). None of these processes is stand-alone; they are all interconnected, forming a complex network of data and information from patient treatment outcome to community awareness, motivation, and mobilization of potential donors. Today, data and documents are mostly stored electronically and operated through hardware networks with appropriate access restrictions (e.g., donor confidentiality, test results) against unauthorized access.
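As a minimal, hypothetical illustration of how such interconnected vein-to-vein data could be structured, the sketch below models a uniquely numbered blood unit accumulating documented process steps as it moves through the chain described above. All class and field names are invented for illustration and do not represent any existing blood establishment or hospital information system.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class ProcessStep:
    """One documented event in the vein-to-vein chain (hypothetical model)."""
    process: str          # e.g., "collection", "testing", "distribution", "bedside"
    operator_id: str      # who or what performed the step (person or robot)
    timestamp: datetime   # when the step was performed
    details: dict = field(default_factory=dict)


@dataclass
class BloodUnit:
    """A uniquely numbered donation and its full traceability record."""
    unit_id: str                  # unique donation number
    donor_id: str                 # pseudonymized donor reference
    abo_rhd: str                  # e.g., "0 RhD-positive"
    history: List[ProcessStep] = field(default_factory=list)

    def record(self, step: ProcessStep) -> None:
        """Append a step so the unit remains traceable from vein to vein."""
        self.history.append(step)


# Illustrative use: document collection and testing of one unit.
unit = BloodUnit(unit_id="NL-2022-000123", donor_id="D-4711", abo_rhd="0 RhD-positive")
unit.record(ProcessStep("collection", "nurse-07", datetime(2022, 2, 1, 9, 30)))
unit.record(ProcessStep("testing", "lab-robot-2", datetime(2022, 2, 1, 14, 0),
                        details={"TTI": "non-reactive"}))
```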
Over the past decades, the most significant change has been the acceptance of Transfusion Medicine (TM) as a medical subdiscipline focusing on the patient to determine appropriate therapy. In the early years of the 21st century, greater emphasis was placed on patient blood management (PBM), hemovigilance, cost recovery, and integration in the healthcare system (21–25). During this time, TM refocused on the patient through restrictive therapeutic guidelines to ensure that every blood transfusion is prescribed on an evidence basis and that alternative therapies are considered to reduce risk to the patient. As a global result, blood usage, especially red cell transfusion, has decreased over the last two decades, a trend further reinforced by the current COVID-19 pandemic. While integrated healthcare delivery evolves, efficient suppliers, including blood establishments, must become LEAN and practice continuous improvement (26). The prime goal is stewardship that embraces efficiency. A blood availability and safety digital footprint, driving efficiency, safety, patient satisfaction, and lower costs, might be the road to the new Rome and contribute to achieving several of the 2016–2030 UN Sustainable Development Goals (SDG) as well as the 2030 UN Universal Health Coverage (UHC) goal (27, 28).
A vein-to-vein digital footprint within TM requires the presence of operational and well-documented primary and supportive processes that may be facilitated by AI (29). This includes donor motivation, call-up, and selection as well as data obtained during the collection process. Checks and balances between expected needs and supplies, as well as automation and robotic elements for greater efficiency, are envisioned in blood separation, quality testing using robotics, quarantine release and final labelling, storage, and distribution (cold chain logistics). Management of the blood product, its labelling, and the blood samples for testing could be improved through Radio Frequency Identification (RFID). Automating current manual processes to a minimal manual touch is envisioned with RFID, e.g., a digital footprint using automation and RFID to direct quality control test tubes to centrifugation, or collected whole blood to a centrifuge and blood separator. The digital footprint could even predict and streamline blood product ordering and establish logistics for inventory and delivery efficiency, ensuring the right blood at the right time at the correct location, all while reducing technical manpower and human errors (Figure 2).
Figure 2. Flow of a potential digital footprint of the integration of RFID into the vein-to-vein transfusion chain (J. Holmberg, 2014; ref. 29).
The implementation of a digital footprint to achieve universal quality data collection and management in the hospital could reduce or prevent errors when completing a physician's order, using AI and RFID for specimen tracking, identification, and labelling. It could also help avoid 'wrong blood in tube', inappropriate transport (cold chain) and crossmatch errors, and identify bedside problems ('is this the right product for the right patient at the right time?'). The digital transfusion potential could create significant financial savings through an integrated vein-to-vein system with a well-functioning clinical interface.
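A minimal sketch, assuming scanned patient-wristband and blood-unit identifiers, of what such a bedside 'right product, right patient, right time' gate could look like in software. The function and field names are hypothetical and the logic deliberately simplified; it is not a description of any existing hospital or RFID system.

```python
def bedside_check(wristband_patient_id: str,
                  unit_intended_patient_id: str,
                  unit_expiry_ok: bool) -> bool:
    """Hypothetical bedside gate before a transfusion is started.

    Returns True only when the scanned wristband matches the patient the
    unit was issued for and the unit has not expired; otherwise the
    transfusion is blocked and the mismatch can be logged for review.
    """
    if wristband_patient_id != unit_intended_patient_id:
        print("BLOCK: unit was issued for a different patient")
        return False
    if not unit_expiry_ok:
        print("BLOCK: unit expired or cold chain not guaranteed")
        return False
    return True


# Illustrative scan result: identifiers read from RFID/barcode at the bedside.
assert bedside_check("P-001987", "P-001987", unit_expiry_ok=True)
```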
On the blood establishment (manufacturing) side, a digital footprint, either integrated with or separate from the hospital information system (HIS), must be transparent in quality data collection and sharing, including supply chain management within the blood establishment through a well-developed supplier-customer relationship. Transparent inventory management has the potential to move the blood supply within an integrated delivery network (IDN) to a vendor-managed inventory within the hospital. Transparency is critical to building the confidence of an IDN that will also meet the needs of unexpected emergencies (29). Evidently, cybersecurity, as a major challenge, should be a top priority for the protection of patient/donor privacy, information, and identification.
Factually, there is growing evidence of AI application in TM, largely machine learning (equipment) but also deep learning applications (donor mobilization, stock management, crossmatching, transfusion prediction in surgical interventions, and prescription). In the advanced world, machine learning has been introduced into clinical decision-making for transfusion, using data sets and stochastic dynamic programming (30, 31) to predict the need (32–34). Artificial neural networks (ANN) have been shown to provide an acceptably reliable mechanism to evaluate preoperatively the potential for peri-operative transfusion across a wide range of surgical interventions, reducing unnecessary crossmatching and the time during the day that blood components are out of stock (35). Pretransfusion compatibility testing is also entering AI through ML, with the introduction in the late 20th century of computerized or electronic crossmatching, thus saving time, reagents, and labor (35, 36). This is considered, and largely accepted, to be as safe as the immediate-spin crossmatch, provided there are no clinically significant alloantibodies present and there is no AB0 discrepancy. In almost all blood manufacturing processes, robotics has been introduced, e.g., automated mixing/weighing machines in blood collection, blood grouping and infectious disease marker testing robots using algorithms, advanced blood cell separators and centrifuges, computerized labelling machines, and machine learning-based cold chain management.
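To make the logic behind electronic crossmatching concrete, the sketch below encodes the standard AB0 red cell compatibility rules together with the two eligibility conditions named above (no clinically significant alloantibodies, no AB0 discrepancy between typings). It is a simplified, hypothetical teaching example, not a validated transfusion algorithm or any vendor's implementation.

```python
# Standard AB0 red cell compatibility: recipient group -> acceptable donor groups.
ABO_COMPATIBLE = {
    "0":  {"0"},
    "A":  {"A", "0"},
    "B":  {"B", "0"},
    "AB": {"AB", "A", "B", "0"},
}


def electronic_crossmatch(recipient_abo: str, recipient_rhd_pos: bool,
                          donor_abo: str, donor_rhd_pos: bool,
                          antibody_screen_negative: bool,
                          abo_typings_concordant: bool) -> bool:
    """Simplified electronic crossmatch decision (illustrative only).

    Eligibility requires a negative antibody screen and concordant AB0/RhD
    typings on record; otherwise a serological crossmatch is needed.
    """
    if not (antibody_screen_negative and abo_typings_concordant):
        return False  # not eligible for electronic release
    if donor_abo not in ABO_COMPATIBLE[recipient_abo]:
        return False  # AB0-incompatible unit
    if donor_rhd_pos and not recipient_rhd_pos:
        return False  # avoid RhD-positive red cells for an RhD-negative recipient
    return True


# Example: group A RhD-positive recipient, group 0 RhD-negative donor unit.
assert electronic_crossmatch("A", True, "0", False,
                             antibody_screen_negative=True,
                             abo_typings_concordant=True)
```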
It is a fiction to believe that AI will replace the workforce of TM. Rather, AI serves as a supportive tool to improve the quality and efficiency of data handling and management through the principles of digital footprinting, directing the blood supply towards an even more effective and safe future.
TM has come a long way, from the discovery of the AB0 blood group antigens to the supportive introduction of AI (machine and deep learning). In a changing and developing healthcare environment, digital footprinting might become an important structural element in Low- and Middle-Income Countries (LMICs) with an integrated blood supply and transfusion system. Such a footprint should involve both the healthcare provider (prediction and prescription) and the supplier (procurement chain), including the community, which should function in a stable national governance and socio-economic climate and environment.
Conflict of Interest: The author declares no potential conflict of interest with respect to research, authorship and/or publication of this chapter.
Copyright and Permission Statement: The author confirms that the materials included in this chapter do not violate copyright laws. Where relevant, appropriate permissions have been obtained from the original copyright holder(s), and all original sources have been appropriately acknowledged or referenced.