Summary
This thesis describes the main theoretical principles underlying new automatic modelling methods, generalizing concepts that originate from theories of artificial neural networks. The new approach allows for the generation of (macro)models for highly nonlinear, dynamic and multidimensional systems, in particular electronic components and (sub)circuits. Such models can subsequently be applied in analogue simulations. The purpose of this is twofold. First, it can help to significantly reduce the time needed to arrive at a sufficiently accurate simulation model for a new basic component, such as a transistor, in cases where a manual, physics-based construction of a good simulation model would be extremely time-consuming. Second, a transistor-level description of a (sub)circuit may be replaced by a much simpler macromodel, in order to obtain a major reduction of the overall simulation time.

In essence, the thesis addresses the problem of constructing an efficient, accurate and numerically robust model, starting from behavioural data obtained from measurements and/or simulations. To achieve this goal, the standard backpropagation theory for static feedforward neural networks has been extended to include continuous dynamic effects such as delays and phase shifts. This is necessary for modelling the high-frequency behaviour of electronic components and circuits. From a mathematical viewpoint, a neural network is then no longer a complicated nonlinear multidimensional function, but a system of nonlinear differential equations, for which one tries to tune the parameters in such a way that a good approximation of some specified behaviour is obtained.
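The idea of tuning the parameters of a differential equation to match a specified behaviour can be illustrated with a minimal numerical sketch. The neuron equation used here, tau * ds/dt = -s + tanh(w*x + b), and the finite-difference gradient descent are deliberately simple stand-ins and are not the thesis' actual neuron model or training algorithm:

```python
# Hypothetical single "dynamic neuron": tau * ds/dt = -s + tanh(w*x + b),
# integrated with forward Euler; its parameters (w, b, tau) are tuned by
# finite-difference gradient descent to approximate a specified step response.
import math

def simulate(params, x=1.0, dt=0.01, steps=500):
    w, b, tau = params
    s, out = 0.0, []
    for _ in range(steps):
        s += (dt / tau) * (-s + math.tanh(w * x + b))  # forward Euler step
        out.append(s)
    return out

def cost(params, target):
    return sum((a - t) ** 2 for a, t in zip(simulate(params), target))

# Specified behaviour: a first-order rise towards 0.8 with time constant 0.5.
target = [0.8 * (1.0 - math.exp(-0.01 * k / 0.5)) for k in range(1, 501)]

params = [1.0, 0.0, 1.0]
c0 = cost(params, target)
for _ in range(300):
    grad = []
    for i in range(3):                        # finite-difference gradient
        p = list(params)
        p[i] += 1e-4
        grad.append((cost(p, target) - cost(params, target)) / 1e-4)
    params = [p - 0.01 * g for p, g in zip(params, grad)]
    params[2] = max(params[2], 0.05)          # keep the time constant positive
c1 = cost(params, target)
print(c0, c1)                                 # the fit improves: c1 < c0
```

The same principle carries over to networks of such equations, where the gradients are obtained analytically rather than by finite differences.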

Based on this theory and the associated algorithms, an experimental software implementation has been made, which can be used to train neural networks on a combination of time-domain and/or small-signal frequency-domain data. Subsequently, analogue behavioural models and equivalent electronic circuits can be generated fully automatically for use in analogue circuit simulators such as Pstar (from Philips), SPICE (from the University of California at Berkeley) and Spectre (from Cadence). The thesis contains a number of real-life examples which demonstrate the practical feasibility and applicability of the new methods.
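Combining time-domain and small-signal frequency-domain data amounts to fitting one parameter set against a single cost function that sums both kinds of error. The sketch below uses a hypothetical one-pole model (not one of the thesis' neural models), and a crude grid search stands in for the gradient-based training:

```python
# Illustrative combined fit: one cost function mixes time-domain step-response
# samples with small-signal frequency-domain (AC) magnitude samples, so a
# single parameter set (g, tau) must match both kinds of behavioural data.
import math

def combined_cost(g, tau, t_data, f_data):
    c = 0.0
    for t, y in t_data:        # time domain: step response g*(1 - exp(-t/tau))
        c += (g * (1.0 - math.exp(-t / tau)) - y) ** 2
    for w, mag in f_data:      # frequency domain: |H(jw)| = g/sqrt(1+(w*tau)^2)
        c += (g / math.sqrt(1.0 + (w * tau) ** 2) - mag) ** 2
    return c

# "Measured" data generated from g = 2.0, tau = 0.1.
t_data = [(t / 10.0, 2.0 * (1.0 - math.exp(-t / 10.0 / 0.1)))
          for t in range(1, 11)]
f_data = [(w, 2.0 / math.sqrt(1.0 + (w * 0.1) ** 2)) for w in (1.0, 10.0, 100.0)]

# Grid search over (g, tau) in place of gradient-based parameter tuning.
best = min(((g / 10.0, tau / 100.0)
            for g in range(1, 31) for tau in range(1, 31)),
           key=lambda p: combined_cost(p[0], p[1], t_data, f_data))
print(best)                    # recovers (2.0, 0.1)
```

Because both data sets constrain the same parameters, a model that fits only the DC behaviour or only the frequency roll-off is penalized; this is what makes the combination valuable for high-frequency component modelling.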

Samenvatting
This thesis describes the main theoretical principles behind new automatic modelling methods that extend concepts originating from theories of artificial neural networks. The new approach makes it possible to generate (macro)models for highly nonlinear, dynamic and multidimensional systems, in particular electronic components and (sub)circuits. Such models can subsequently be used in analogue simulations. This serves a twofold purpose. First, it can help to significantly reduce the time needed to arrive at a sufficiently accurate simulation model of a new basic component, such as a transistor, in cases where manually constructing a good simulation model from physical knowledge would be very time-consuming. Second, a transistor-level description of a (sub)circuit can be replaced by a much simpler macromodel, in order to obtain a drastic reduction of the total simulation time.

In essence, the thesis addresses the problem of constructing an efficient, accurate and numerically robust model from behavioural data obtained from measurements and/or simulations. To achieve this goal, the standard backpropagation theory for static feedforward neural networks has been extended in such a way that continuous dynamic effects, such as delays and phase shifts, can also be taken into account. This is necessary for modelling the high-frequency behaviour of electronic components and circuits. From a mathematical point of view, a neural network is then no longer a complicated nonlinear multidimensional function, but a system of nonlinear differential equations, of which one tries to determine the parameters such that a good approximation of a specified behaviour is obtained.

Based on the theory and algorithms, an experimental software implementation has been made, with which neural networks can be trained on a combination of time-domain and/or small-signal frequency-domain data. Afterwards, analogue behavioural models and equivalent electronic circuits can be generated fully automatically for use in analogue circuit simulators such as Pstar (from Philips), SPICE (from the University of California at Berkeley) and Spectre (from Cadence). The thesis contains a number of real-life examples that demonstrate the practical feasibility and applicability of the new methods.

Curriculum Vitae
Peter Meijer was born on June 5, 1961 in Sliedrecht, The Netherlands. In August 1985 he received his M.Sc. degree in Physics from the Delft University of Technology. His master's project was carried out in the university's Solid State Physics group, on the subject of non-equilibrium superconductivity and sub-micron photolithography.

Since September 1, 1985, he has been working as a research scientist at the Philips Research Laboratories in Eindhoven, The Netherlands, on black-box modelling techniques for analogue circuit simulation.

In his spare time, and with subsequent support from Philips, he developed a prototype image-to-sound conversion system, possibly as a step towards the development of a vision substitution device for the blind.