In advanced chip manufacturing, Optical Proximity Correction (OPC) is an essential technique that modifies mask designs to compensate for distortions caused by diffraction and process effects. As feature sizes shrink into the nanometer regime, conventional OPC workflows face mounting challenges. Complex layouts and ever-tighter process tolerances drive exploding computational requirements; a single state-of-the-art mask set can demand on the order of 30 million CPU hours to compute. This compute burden, which scales with both smaller features and larger chip areas, has made OPC and lithography simulation some of the most computationally intensive steps in semiconductor fabrication.
Faced with these demands, the industry is exploring a paradigm shift: leveraging data-driven methods (AI, machine learning, and generative models) to accelerate and enhance OPC and lithography process simulation. Recent algorithmic breakthroughs in machine learning have broadened the scope of computational lithography applications, moving beyond earlier uses like hotspot detection toward tackling full OPC and even inverse lithography (computing mask solutions directly from desired patterns). In simple terms, instead of relying solely on physics-based models and iterative trial-and-error, engineers are now "teaching" machines to predict lithographic outcomes and mask corrections using vast data and AI.
In this article I review the latest developments in data-driven approaches to OPC and process simulation. I will cover how AI/ML and generative AI are being applied, real-world use cases and trends, and what it all means for the future of chip manufacturing, in a manner that is accessible but technically sound.
Traditional OPC is grounded in physics and empirical statistical modeling. Photolithography simulators use detailed physics (optical diffraction models, resist chemistry models, etc.), calibrated by fitting to wafer measurements. In practice, OPC tools iteratively adjust mask geometries (often breaking edges into segments) and use these calibrated models to predict printing, refining the mask until the simulated wafer image matches the target within tolerance. This calibration is essentially a regression problem: dozens or even hundreds of model parameters (optical aberrations, resist diffusion lengths, etch biases, etc.) are tuned to minimize errors against empirical data. Over the years, OPC models have grown to include more and more effects to improve accuracy. However, this has also made model calibration a heavy process. Engineers have employed techniques like genetic algorithms and gradient descent to search for the best model parameters, often breaking the problem into sub-steps to manage complexity. Even so, achieving high accuracy across all layout patterns can be difficult, and each new process node or change (such as new materials or a different lithography tool) may require re-calibration from scratch.
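To make the regression framing concrete, here is a minimal Python sketch that calibrates an invented two-parameter compact model (a constant print bias plus a pattern-density "proximity" term) against synthetic wafer measurements using plain gradient descent. The model form and all numbers are made up for illustration; real calibrations involve far more parameters, data, and search machinery.

```python
import random

# Toy "compact model": predicted wafer CD for a mask feature of width w and
# local pattern density d, with two calibratable parameters. This functional
# form is invented for illustration only.
def predict_cd(w, d, bias, prox):
    return w + bias + prox * d

# Synthetic "wafer measurements" produced by a hidden ground truth plus noise.
random.seed(0)
TRUE_BIAS, TRUE_PROX = 2.0, -5.0
data = [(w, d, predict_cd(w, d, TRUE_BIAS, TRUE_PROX) + random.gauss(0, 0.05))
        for w in (20, 30, 40, 50) for d in (0.2, 0.4, 0.6)]

# Calibration as regression: gradient descent on the mean squared CD error.
bias, prox, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    g_bias = g_prox = 0.0
    for w, d, measured in data:
        err = predict_cd(w, d, bias, prox) - measured
        g_bias += 2 * err / len(data)
        g_prox += 2 * err * d / len(data)
    bias -= lr * g_bias
    prox -= lr * g_prox

print(round(bias, 1), round(prox, 1))  # recovers approximately 2.0 and -5.0
```

In production flows this kind of local descent is typically combined with global search (the genetic algorithms mentioned above) and with careful weighting of the most critical patterns.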
Enter data-driven approaches: instead of explicitly programming all the physics, what if we let a machine learning model learn the pattern-to-pattern distortions directly from data? The promise of AI in OPC is to bypass some of the brute-force physics simulation by predicting outcomes with a trained model. This could dramatically speed up the OPC loop and even find better solutions that a traditional algorithm might miss. Machine learning models can digest huge numbers of layout examples and capture complex nonlinear relationships that would be hard to model with parametric equations alone. Crucially, this does not mean physics is thrown out the window. In fact, hybrid approaches are emerging. A notable strategy is to use ML to get an initial correction that is close to optimal, then finish with conventional physics-based OPC for fine-tuning. This leverages the best of both worlds: AI for speed, physics for final accuracy. This theme recurs in several of the use cases discussed below.
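The warm-start idea can be sketched with a toy one-dimensional process model (all numbers invented): an imperfect stand-in for a trained ML model proposes an initial edge correction, and the same physics-based iteration then refines both a cold start and the warm start to the same tolerance.

```python
# Toy 1-D "process": a mask edge drawn at position m prints at 1.1*m - 3
# (an invented systematic distortion). Goal: place m so the edge prints at t.
def simulate(m):
    return 1.1 * m - 3.0

# Stand-in for a trained ML model: a deliberately imperfect learned inverse.
def ml_initial_guess(t):
    return (t + 3.0) / 1.1 + 0.05   # close to the exact inverse, small error

def opc_refine(m, t, tol=1e-3):
    """Conventional model-based loop: nudge the edge until the simulated
    print matches the target within tolerance; return final edge and steps."""
    steps = 0
    while abs(simulate(m) - t) > tol:
        m -= 0.5 * (simulate(m) - t)   # damped correction step
        steps += 1
    return m, steps

target = 40.0
_, cold_steps = opc_refine(target, target)                    # nominal start
_, warm_steps = opc_refine(ml_initial_guess(target), target)  # ML warm start
print(cold_steps, warm_steps)  # the warm start needs fewer physics iterations
```

The point of the sketch is the division of labor: the learned model only has to land near the answer, and the trusted physics loop closes the remaining gap.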
Recent years have seen a flurry of research applying machine learning to OPC. Early attempts focused on using ML to predict OPC adjustments more efficiently. For example, engineers have trained models to output the required mask tweaks (e.g. the displacements of mask segment edges) for a given target pattern, replacing some iterative steps of model-based OPC with a one-shot prediction. The idea is that after training on a large dataset of layouts and their known OPC solutions, the ML model can generalize to new layouts, essentially mimicking what a seasoned lithography engineer or OPC algorithm would do, but in a tiny fraction of the time.
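As a cartoon of such one-shot prediction, the sketch below uses a nearest-neighbor lookup: given simple features of a target pattern (line width and local density), it returns the edge displacement that a model-based OPC run chose for the most similar training clip. The training pairs and features are invented; published methods use neural networks and far richer pattern encodings.

```python
# Invented training pairs: (width_nm, local_density) -> edge displacement (nm)
# that a model-based OPC run chose for that clip.
train = [
    ((20, 0.2), 3.0), ((20, 0.6), 4.5), ((40, 0.2), 1.5),
    ((40, 0.6), 2.5), ((60, 0.2), 0.5), ((60, 0.6), 1.0),
]

def predict_displacement(width, density):
    # One-shot prediction: return the displacement of the most similar clip,
    # with width rescaled so both features contribute comparably.
    def dist(pair):
        (w, d), _ = pair
        return ((w - width) / 50.0) ** 2 + (d - density) ** 2
    return min(train, key=dist)[1]

print(predict_displacement(22, 0.25))  # -> 3.0, from the (20, 0.2) clip
```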
A key challenge for such ML-OPC approaches is data: you need a representative set of layout clips with correct OPC answers to train the model. Researchers have explored effective data sampling strategies to ensure the training covers a wide variety of 2D patterns, since modern layouts are enormously diverse. If the model only sees a narrow range of patterns, it may falter on new designs, an obvious concern for something as critical as mask synthesis. Despite these challenges, results have been encouraging. One study demonstrated a machine learning method that predicts OPC mask edge movements with accuracy close to traditional methods, while significantly reducing turnaround time (TAT) for full-chip OPC.
Perhaps the most exciting advances come from generative AI methods, algorithms that do not just make yes/no predictions but can create new patterns. In the context of OPC, generative models are being used to directly generate mask corrections or assist features, as if they were "designing" the mask alongside the engineer. A prime example is the use of Generative Adversarial Networks (GANs) for OPC.
One pioneering work, GAN-OPC, reframed OPC as an image-to-image translation problem: given the "desired" pattern (design intent), generate the "corrected" mask pattern. A GAN setup involves two neural networks, a generator and a discriminator. The generator learns to output mask patterns that, when fed into a lithography simulator, would print the target shapes. The discriminator acts as a critic, distinguishing the generator's output from the known correct masks during training, thereby guiding the generator to improve. In GAN-OPC, the generator was structured like an auto-encoder network that ingests the target layout image and outputs a mask image. Over training, it effectively learns the OPC behavior, i.e. how to modify shapes to counteract systematic distortions. Impressively, this generative approach not only sped up convergence of the extremely compute-heavy inverse lithography solution, but even improved some aspects of pattern quality compared to conventional ILT methods. In one reported result, the GAN-OPC method achieved about a 2× reduction in overall OPC runtime while slightly reducing errors like edge placement error (EPE) and variability banding, versus traditional ILT-based OPC flows. In other words, the machine learned to find a mask solution faster and got a better result, a big win for generative learning in OPC.
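The generator/discriminator dynamic is easiest to see on a toy problem far smaller than mask images. The pure-Python sketch below trains a one-parameter "generator" against a logistic "discriminator" on scalar data: the critic learns to separate real samples (clustered near 4.0) from generated ones, and the generator shifts its output to fool the critic. It illustrates the adversarial loop only; it is not a lithography model, and every number in it is invented.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" samples cluster near 4.0 (a stand-in for images of correct masks).
def real_sample():
    return random.gauss(4.0, 0.1)

theta = 0.0       # generator parameter: G(z) = theta + z
w, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for _ in range(4000):
    x_real = real_sample()
    x_fake = theta + random.gauss(0, 0.1)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on the logistic log-likelihood).
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        p = sigmoid(w * x + c)
        w += lr * (label - p) * x
        c += lr * (label - p)

    # Generator step: push D(fake) toward 1, i.e. fool the critic.
    p = sigmoid(w * x_fake + c)
    theta += lr * (1.0 - p) * w   # d/dtheta of log D(theta + z)

print(theta)  # drifts toward the real data mean near 4.0
```

The same loop, with convolutional networks in place of these scalars and lithography-guided objectives, is essentially the training scheme GAN-OPC describes.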
Building on this, researchers have extended generative frameworks to related RET problems. A notable case is sub-resolution assist features (SRAFs), the tiny assistive shapes added to masks to improve lithography depth of focus, which are usually placed by complicated rules or iterative models. A team led by Alawieh et al. proposed GAN-SRAF, using a conditional GAN to automatically generate SRAF patterns for any given layout. They devised a clever encoding of the layout input so the GAN could understand where assist features are needed. The payoff was dramatic: the GAN-based approach achieved up to 14× faster runtime than a previous ML-based SRAF method and about 144× faster than conventional model-based SRAF insertion, all while maintaining comparable lithographic quality. This is a striking example of generative AI delivering orders-of-magnitude speedups in OPC/RET tasks that historically bogged down computational lithography.
Other generative techniques include using cycle-consistent GANs and reinforcement learning to handle scenarios where obtaining paired training data is difficult. For instance, one study used an unpaired learning approach (via CycleGAN) to insert assist features by learning from final wafer images, effectively deriving where assists should go without one-to-one training examples. Another innovative direction has been to use reinforcement learning for OPC, treating the mask correction process like a game where the "agent" (ML model) tries sequences of edge moves to maximize a reward (print accuracy). A 2025 study reports using deep reinforcement learning to automatically tune OPC model parameters, finding solutions comparable to human-tuned models with the potential to reduce iteration time. All these approaches share a common thread: they use data-driven optimization to either guide or replace parts of the OPC process that were traditionally hand-crafted or brute-force.
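To make the "OPC as a game" framing concrete, here is a tiny tabular Q-learning sketch (everything in it invented): the agent moves a single mask edge on an integer grid and is rewarded for how closely a toy process model prints it on target. The cited studies use deep RL over far richer state and action spaces; this miniature only shows the reward-driven loop.

```python
import random

random.seed(3)

TARGET = 7
def reward(pos):
    printed = pos - 2          # toy process: features print two units short
    return -abs(printed - TARGET)

ACTIONS = (-1, +1)             # move the edge left or right by one grid unit
Q = {(s, a): 0.0 for s in range(16) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                       # episodes
    s = random.randrange(16)
    for _ in range(30):                    # steps per episode
        if random.random() < eps:
            a = random.choice(ACTIONS)     # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
        s2 = min(15, max(0, s + a))
        Q[(s, a)] += alpha * (reward(s2)
                              + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# Greedy rollout from an uncorrected edge at 4, tracking the best position.
s, best = 4, 4
for _ in range(12):
    s = min(15, max(0, s + max(ACTIONS, key=lambda a: Q[(s, a)])))
    if reward(s) > reward(best):
        best = s
print(best)  # the rollout reaches position 9, where the toy print hits target
```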
OPC cannot exist without lithography simulation; it is the feedback mechanism that tells us whether a given mask will print correctly. Here too, AI is making waves. Machine-learning-based lithography simulators act as surrogate models for the physics, predicting the printed wafer pattern (or critical metrics like contours and CD values) much faster than rigorous simulation. The quintessential example is LithoGAN, which demonstrated an end-to-end neural network model of the lithography process. LithoGAN uses a conditional GAN to output a simulated resist image for a given mask input, effectively learning the complex aerial image formation and resist processing from data. Instead of solving Maxwell's equations and reaction-diffusion equations every time, once LithoGAN is trained it can predict how a mask will print in milliseconds. In fact, this approach yielded up to an 1800× speedup in lithography simulation compared to traditional methods, with only a minor loss in accuracy. Such a surrogate can be plugged into OPC verification or hotspot detection flows to drastically reduce turnaround time.
It is worth noting that these ML simulators treat the layout like an image, a theme across many data-driven lithography efforts. By representing masks and wafers as pixelated images, one can borrow powerful computer vision models. Convolutional neural networks (CNNs) and GANs excel at image-to-image tasks, and lithography can be viewed as exactly that: transforming one "image" (the mask layout) into another (the printed pattern). The success of LithoGAN and similar models underscores how generative learning can serve as a fast emulator or "virtual lithography tool."
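The image-to-image view can be illustrated with a crude stand-in for the forward process: blur a binary mask image with a small Gaussian kernel (a cartoon of diffraction) and threshold the result (a cartoon of resist development). This is purely illustrative, not a physically calibrated model.

```python
import math

def gaussian_kernel(radius=2, sigma=1.0):
    k = [[math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
          for dx in range(-radius, radius + 1)]
         for dy in range(-radius, radius + 1)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k], radius

def print_mask(mask, threshold=0.5):
    """Blur the binary mask (cartoon of diffraction), then threshold
    (cartoon of resist development) to get the 'printed' binary image."""
    kernel, r = gaussian_kernel()
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += kernel[dy + r][dx + r] * mask[yy][xx]
            out[y][x] = 1 if acc >= threshold else 0
    return out

# A one-pixel-wide line on a 7x7 mask: too narrow to survive the blur,
# it vanishes entirely, the kind of loss OPC exists to pre-compensate.
mask = [[1 if x == 3 else 0 for x in range(7)] for _ in range(7)]
printed = print_mask(mask)
print(sum(map(sum, mask)), sum(map(sum, printed)))  # 7 0
```

Models like LithoGAN effectively learn a far richer version of this mask-to-print image transform directly from data.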
Beyond full images, other ML models predict specific outcomes, e.g., will a given layout snippet be a yield-critical hotspot? This was actually one of the first uses of machine learning in lithography: instead of exhaustive simulation, classify patterns likely to print poorly so they can be fixed in advance. Researchers have applied deep neural nets to hotspot detection for over a decade, using both supervised learning and autoencoder-based feature extraction to flag problematic geometries. These early successes built confidence that ML can handle the complexity of real designs. Now, with generative models and advanced neural nets, the frontier has moved to predicting contours and even generating mask fixes, not just identifying trouble areas.
So, how close are these AI-driven methods to being used in production fabs and EDA tools? The gap between research and industry practice is narrowing rapidly. In fact, GPU-accelerated computing combined with AI is already being rolled out to tackle lithography challenges at leading-edge nodes. In March 2024, NVIDIA announced that TSMC and Synopsys are adopting its cuLitho platform, a GPU-accelerated computational lithography library, in production, citing speed-ups of 40× to 60× on OPC and related tasks. Alongside the raw compute boost, NVIDIA revealed that generative AI algorithms have been integrated to further enhance OPC on this platform. Specifically, the generative AI component can create a near-optimal inverse mask solution to account for diffraction, which is then refined by traditional rigorous methods, effectively cutting the overall OPC runtime in half beyond the GPU hardware gains. This is a landmark example of a hybrid AI approach (much like the GAN-OPC idea) being deployed by a leading foundry: an ML model produces a good first-cut mask, and the conventional OPC engine cleans up any residual errors, assuring final accuracy. The result is faster mask synthesis without sacrificing trust in the final output.
Despite such progress, it is important to acknowledge that full adoption of AI/ML in OPC is still in its early stages. As of now, most production flows use AI as an assistive tool rather than a wholesale replacement for mature OPC algorithms. Industry experts have pointed out key reasons for caution: lack of interpretability of black-box models, concerns about how well an ML model trained on past data will extrapolate to new designs, and the high stakes of any error in manufacturing.
A mask that is 99% correct is not good enough if that 1% error could ruin costly wafers.
Many current solutions therefore use AI in a constrained way: for example, to suggest an OPC correction that is then verified or adjusted by physics-based simulation, or to handle parts of the process that are less critical. One commercial approach allows ML to perturb OPC model parameters only within a tight margin (e.g. ±0.5 nm) during model calibration, ensuring it does not stray beyond known-safe bounds.
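That guard-band idea is simple to express in code. The sketch below clamps ML-suggested parameter values to ±0.5 nm around their physics-calibrated values (the margin comes from the text; the parameter names and numbers are invented for illustration).

```python
GUARD_NM = 0.5   # guard band from the text; everything below is invented

def apply_ml_suggestion(calibrated_nm, suggested_nm):
    """Accept an ML-suggested parameter only within +/-GUARD_NM of the
    physics-calibrated value, clamping anything that strays outside."""
    lo, hi = calibrated_nm - GUARD_NM, calibrated_nm + GUARD_NM
    return min(max(suggested_nm, lo), hi)

calibrated = {"resist_diffusion_nm": 12.0, "etch_bias_nm": 2.0}
ml_suggested = {"resist_diffusion_nm": 12.3, "etch_bias_nm": 4.0}  # 2nd is wild
safe = {k: apply_ml_suggestion(v, ml_suggested[k])
        for k, v in calibrated.items()}
print(safe)  # {'resist_diffusion_nm': 12.3, 'etch_bias_nm': 2.5}
```

The in-band suggestion passes through untouched, while the out-of-band one is pinned to the edge of the known-safe region.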
Another trend addressing the trust issue is the development of physics-informed ML models. Rather than a pure black box, these models embed physical laws or known invariances into the neural network's structure. Imec, for instance, has discussed "physics-based machine learning" for lithography, where the model inherently respects physical constraints (like energy conservation or resist diffusion behavior). Such models tend to be more interpretable and require less training data, because they do not have to re-learn fundamentals we already know. By building some physics domain knowledge into the AI, engineers aim to get the benefits of data-driven speed and flexibility while mitigating the risk of non-physical predictions. This approach could accelerate acceptance of ML for OPC: imagine a neural network that not only predicts a mask pattern but can explain which optical interference effect caused a given correction, because that understanding is built into its architecture.
Industry trends clearly indicate a growing synergy between big data, AI, and computational lithography. The fact that major EDA vendors and foundries are investing in these technologies (often in partnership, as seen with the TSMC-Synopsys-NVIDIA collaboration) shows a recognition that traditional methods alone may not sustain Moore's Law or its successors. At the same time, the conservatism of the semiconductor manufacturing world means new methods must prove themselves thoroughly.
We can expect a gradual ramp-up: first using AI to augment existing OPC flows (speeding up what was manual or semi-manual tuning), then progressively trusting AI with larger portions of the task as confidence builds. Given the early results, e.g. ML-OPC frameworks successfully mimicking conventional OPC corrections across full-chip layouts, there is real optimism that data-driven OPC can transition from academic demos to everyday production tools within the coming few technology nodes.
The convergence of AI and semiconductor lithography has unlocked new possibilities in how we correct and model the printing of nanometer-scale circuits. Data-driven approaches to OPC and process simulation are no longer just theoretical ideas; they have demonstrated compelling advantages in research settings and are beginning to influence industrial practice. We see neural networks generating mask patterns, something almost unthinkable a decade ago. We see surrogate models delivering simulation results in milliseconds with only a slight accuracy trade-off, an 1800× speed leap that can radically shorten design cycles. And we see hybrid workflows where AI and physics work hand in hand, a generative AI suggesting a solution and a physics-based engine perfecting it, combining reliability with revolutionary efficiency.
For industry professionals, the takeaway is twofold. First, these technologies promise to maintain the pace of innovation as traditional OPC methods strain under complexity; they are tools to extend optical lithography and reduce turnaround time when brute-force computation would otherwise be a bottleneck. Second, implementing them will require thoughtful integration: curating the right training data and features, imposing the necessary constraints, and rethinking verification methodologies to accommodate a learned model in the loop.
The role of the lithography engineer is evolving: tomorrow's OPC expert may also need to be a data scientist, guiding and interpreting an AI that is doing the heavy lifting.
The pace of progress is impressive. In just the past few years, we went from using ML primarily for identifying problem areas (hotspots) to using it to actually fix those problems (proposing mask corrections). Generative AI, in particular, has brought an imaginative leap: it is giving us mask solutions that we did not explicitly program, some of which have proven to be as good as or better than human-tuned ones. As computational resources like GPUs and cloud clusters become more accessible, and as more lithography data is collected from fabs (providing fuel to train ML models), we can expect these AI-driven methods to only get better.
Data-driven OPC and process simulation represent a powerful augmentation to the semiconductor manufacturing toolkit. They strike a balance between maintaining rigorous accuracy and pushing the envelope on efficiency. The tone of recent industry announcements is optimistic, with phrases like "opening new frontiers for semiconductor scaling" and "revolutionizing the fabrication process" being used without hyperbole. There is a sense that we are at the dawn of a new era where silicon manufacturing meets artificial intelligence in a deeply impactful way. By embracing these developments with both enthusiasm and due diligence, the industry can continue to print ever smaller, more complex chips: faster, smarter, and with a little help from machine learning.
While AI-assisted OPC and data-driven lithography simulation have shown significant promise, full-scale adoption in semiconductor manufacturing remains limited. Several technical, operational, and industry-specific challenges must be overcome before AI becomes a dominant force in OPC workflows.
1. Trust and Interpretability: Can AI Be Trusted for High-Stakes Manufacturing?
One of the biggest obstacles to adoption is the black-box nature of AI models. Traditional OPC methods are rooted in physics-based models, where engineers can analyze and debug every step of the correction process. In contrast, deep learning models, especially generative AI (e.g., GAN-OPC, LithoGAN), operate as black boxes, making it hard to understand why a certain correction was applied.
This lack of transparency raises significant concerns:
- Debugging Issues: If an AI-generated OPC solution produces an error, engineers may struggle to pinpoint the root cause and make manual adjustments.
- Regulatory and Certification Hurdles: Semiconductor fabs operate in a highly regulated environment where every manufacturing step must be validated and traceable. AI's lack of interpretability complicates qualification for production use.
- Reluctance to Rely on AI for Mission-Critical Processes: Any AI-generated OPC pattern that deviates unexpectedly from physical expectations could result in yield loss, expensive mask re-spins, or even catastrophic chip failures.
To address this, the industry is exploring hybrid AI + physics approaches, where AI assists in OPC correction but physics-based verification remains a core part of the flow. This ensures AI-generated solutions are both efficient and physically accurate.
2. Data Challenges: Insufficient and Biased Training Data
AI models require large and diverse datasets to generalize well across different chip designs and process conditions. However, gathering high-quality training data for AI-driven OPC is a major challenge.
- Limited High-Resolution OPC Datasets: AI models need vast amounts of OPC-corrected mask layouts and their corresponding wafer results. However, OPC data is often proprietary and scattered across different fabs and foundries, making it difficult to compile comprehensive training sets.
- Bias in Training Data: If the AI model is trained mostly on layouts from one foundry, one technology node, or one lithography tool, it may not generalize well to new manufacturing environments. This overfitting problem means AI models might perform well in lab settings but fail in real-world production scenarios.
- Difficulty in Collecting "Failure Data": AI models often benefit from learning failure cases (e.g., catastrophic mask print failures). However, semiconductor manufacturers typically do not produce or store failed designs, making it hard to teach AI to anticipate and prevent failure modes.
A potential solution to this problem is the use of synthetic data generation, for example physics-informed AI models that create realistic training data, including failure cases. This approach is being explored to supplement the limited amount of actual OPC training data.
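A sketch of the synthetic-data idea: generate random toy "layout clips," label them with a cheap physics proxy, and deliberately keep the failure cases that real fab archives rarely contain. The printability rule below is invented purely for illustration.

```python
import random

random.seed(7)

def printability_proxy(width_nm, space_nm):
    # Invented rule of thumb: narrow lines with tight spacing fail to print.
    return width_nm - 0.3 * (60.0 - space_nm) > 15.0

# Draw random toy "layout clips" and label them with the cheap proxy,
# retaining both passing and failing examples for training.
dataset = []
for _ in range(1000):
    w = random.uniform(10, 60)   # line width, nm
    s = random.uniform(10, 60)   # spacing to nearest neighbor, nm
    dataset.append((w, s, printability_proxy(w, s)))

failures = [clip for clip in dataset if not clip[2]]
print(len(dataset), len(failures) > 0)  # 1000 True
```

In practice the labeling proxy would be a calibrated (or physics-informed, learned) simulator rather than a hand-written rule, but the workflow of sampling, labeling, and balancing classes is the same.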
3. Model Generalization: Can AI Adapt to New Process Nodes?
A major practical concern for semiconductor manufacturers is whether an AI-assisted OPC solution trained for one process node (e.g., 5 nm) can work for the next node (e.g., 3 nm or 2 nm). Unlike traditional physics-based OPC, which can be fine-tuned and scaled using physical principles, AI models are often "locked" to the dataset they were trained on.
Challenges include:
- Process Variability: Even within the same node, different foundries (e.g., TSMC vs. Intel vs. Samsung) have unique lithography stacks, requiring separate AI models for each process.
- Difficulty in Transfer Learning: While transfer learning is common in general AI applications, adapting an OPC AI model from one process node to another is non-trivial and often requires significant re-training.
- Need for Continuous Re-Training: AI models may require frequent updates to remain effective as new materials, lithography techniques, and EUV mask configurations are introduced.
Some researchers are working on self-learning AI-OPC frameworks, where models can continually refine themselves based on incoming fab data. However, these approaches are still in early development.
4. Compute Resources and Deployment Challenges
Although AI has the potential to speed up OPC, training and deploying AI models at scale remains computationally expensive. Semiconductor fabs deal with huge chip layouts, meaning AI-based OPC must process terabytes of data efficiently.
Key challenges:
- Training AI Models Is Computationally Expensive: Training a deep learning model for OPC can require thousands of GPU-hours. While companies like NVIDIA are accelerating computational lithography with GPU-based solutions (e.g., cuLitho), the hardware investment is still substantial.
- Inference Speed vs. Accuracy Trade-off: AI-based OPC models must be as fast as possible while maintaining high precision. A model that is 10× faster but 5% less accurate might not be acceptable for high-yield semiconductor manufacturing.
- Integration Into Existing EDA Flows: AI-based OPC solutions must integrate seamlessly with existing EDA toolchains (e.g., Synopsys, Cadence, Mentor). This requires custom tool development, which increases adoption costs and complexity.
One promising direction is edge AI for OPC, where smaller, specialized AI models perform inference directly within the fab environment, reducing the need for cloud-based compute resources.
5. Industry Conservatism: Risk Aversion in High-Cost Manufacturing
The semiconductor industry is deeply risk-averse due to the high costs associated with manufacturing defects. A single defect in OPC could lead to a full-chip mask re-spin, costing millions of dollars and delaying product releases.
As a result:
- Engineers tend to prefer "proven" methods over experimental AI approaches. AI must demonstrate near-perfect reliability before replacing traditional OPC techniques.
- AI-generated OPC must undergo rigorous validation cycles, which adds time and cost to adoption.
- Early AI-based OPC solutions are being used mainly in low-risk applications (e.g., assisting engineers rather than replacing traditional OPC entirely).
This is why many current deployments of AI in OPC are hybrid models, where AI suggests corrections but traditional physics-based OPC ultimately verifies and applies them.
Despite these roadblocks, AI is making steady inroads into OPC and lithography simulation. Companies like TSMC, Synopsys, and NVIDIA are actively investing in AI-driven OPC, and early results suggest that hybrid AI + physics approaches are the most promising path forward.
Key strategies to overcome adoption barriers include:
- Improving AI Interpretability: developing physics-informed AI and hybrid AI + physics approaches to build trust in AI-generated OPC.
- Enhancing Model Generalization: using transfer learning and continual learning to make AI-OPC solutions adaptable across multiple process nodes.
- Creating Better Training Data: leveraging synthetic data generation to compensate for the scarcity of diverse, real-world OPC datasets.
- Optimizing Compute Efficiency: exploring edge AI and GPU-accelerated computing to make AI-OPC solutions practical at scale.
- Proving AI's Reliability: demonstrating AI's accuracy through extensive validation and gradual integration into production workflows.
AI-driven OPC is unlikely to completely replace traditional methods in the near future, but it will augment and accelerate existing workflows. The key to success lies in a cautious yet forward-thinking approach, where AI is integrated incrementally and validated rigorously. As confidence in these methods grows, we can expect AI to play an increasingly central role in computational lithography, helping fabs print ever-smaller, more complex chips with greater efficiency.
- Yang, T. et al. "GAN-OPC: Mask optimization with lithography-guided generative adversarial nets." DAC 2018.
- Lee, H. et al. "Thread scheduling for GPU-based OPC simulation on multi-thread." Proc. SPIE 10587, Optical Microlithography XXXI, pp. 204–210, Mar 2018.
- Ye, W. et al. "LithoGAN: End-to-end lithography modeling with generative adversarial networks." DAC 2019.
- Xue, S. et al. "Machine learning SRAF insertion for mask optimization." Proc. SPIE 11187, 2019.
- Abdelghany, A. et al. "Implementing machine learning OPC on product layouts." SPIE Advanced Lithography 11328, 2020.
- Liu, Q. et al. "Adversarial attack on deep learning-based hotspot detection." Proc. SPIE 11517, 2020.
- Alawieh, M. B. et al. "GAN-SRAF: Sub-Resolution Assist Feature Generation Using Generative Adversarial Networks." IEEE TCAD 40(11), 2342–2355, 2021.
- Ciou, W. et al. "Machine learning optical proximity correction with generative adversarial networks." J. Micro/Nanopattern. Mater. Metrol. 21(4), 041606, 2022.
- Shao, H.-C., Lin, C.-W., and Fang, S.-Y. "Data-Driven Approaches for Process Simulation and Optical Proximity Correction." 2023 28th Asia and South Pacific Design Automation Conference (ASP-DAC), Tokyo, Japan, 2023, pp. 721–726.
- Imec (Y. Sherazi et al.). "Physics-based machine learning models for lithography." Imec Technology Forum, 2023.
- Habib, M. S. et al. "Novel End-to-End Production-Ready Machine Learning Flow for Nanolithography Modeling and Correction." arXiv:2401.02536, Jan 2024.
- Huang, J. (NVIDIA CEO). "Computational lithography… accelerated computing and generative AI to open new frontiers." NVIDIA Press Release, Mar 2024.
- Zhu, H., Jiang, X., Shu, D., Cheng, X., Hou, B., and You, H. "A Review of DNN and GPU in Optical Proximity Correction." 2024 2nd International Symposium of Electronics Design Automation (ISEDA), Xi'an, China, 2024, pp. 703–709, doi: 10.1109/ISEDA62518.2024.10617556.
- Capodieci, L. "Transforming the promise of generative AI into value-added applications for physical design layout analytics, mask data synthesis and lithography simulation." SPIE Advanced Lithography Conference, 2025.
- Giri, J. "Optical Proximity Correction in the manufacturing of Integrated Circuits, Part 1." (link)