Prompted by an old discussion on Reddit (link expired), kicked off by the question: “Please explain OOP to me in the language of a five-year old”, I found myself musing on the reasons computing seems to stay stuck in the Middle Ages.
It is not for lack of great minds or vision. In fact, the problem seems to be that nobody ever reads anything. Why is that? Is there some kind of unspoken consensus that, since computing science is soooo new and changing so rapidly, anything older than a year should already be considered obsolete?
Let’s catalogue a few of the main misconceptions, shall we?
It is all about automation
Basically, this misconception is foremost in the minds of most people wrestling with IT in the context of business or society. We see computers as a kind of machine. A machine is seen as something that helps us, humans, do things. What things? Well, things we already did, such as computing (the mathematical kind), searching, manufacturing, accounting, writing.
If this is your underlying (and usually totally subconscious) assumption, then it is almost impossible to see the hidden treasures (and, yes, dangers, but we will come to that later). You live in a world that does not need to change, really. Or rather, I should say: you do not need to change. Everything remains as it was; it is just that computers are doing some of it. You do the same things. It is the old Western metaphor of the inanimate world just being there to serve us, superior and in some ways detached human beings. Computers, the internet, it’s all just machinery. It is like a steam engine or clockwork. More complex perhaps, but intrinsically the same.
Older than 1 year is irrelevant
We live in an era of change. At least that is the slogan. Constant change, and we are wrestling to keep up. As individuals, as enterprises. We are constantly introducing “new” ways of coping with those changes, we are officially adhering to the religion of change.
And since these changes are so ubiquitous, we cannot rely on the past anymore. Past knowledge or experience no longer applies, so anything thought of in the past has become irrelevant.
Computing is an independent discipline
Well, for that matter, I have a feeling that any discipline is seen as an independent one.
Did any of you read Isaac Asimov’s book “Foundation”? Maybe not. The book started to take form in 1942, but it talked about a problem Asimov, as a biochemist, was acutely confronted with: the splintering of scientific disciplines, akin to the blind men and the elephant, with each discipline functioning in a silo and the scientific community endlessly repeating what other scientists said earlier.
Ethics don’t really come into play
When I started studying physics and mathematics at the University of Utrecht, The Netherlands, I was shocked to find that I was the only student in my year taking a parallel course in “Philosophy of Science”. I could not understand how my fellow students thought they could be effective scientists (I must admit, at the time my main ambition was to be a famous scientist) without taking into account the broader view, in fact as broad as is feasible for a mere individual.
It’s about data
This is one of the misconceptions that I worry about most. It creates an endless stream of misery and problems (it’s too much, there are privacy concerns, and all this stuff about semantics, how to structure it, to name a few). Why is it about data? Or, still awful but slightly less so, about information? There is already too much information! (maybe you like my article on The Inversion of Big Data, or, about the undervalued distinction between data and information: Business Intelligence, an alternate definition)
The solution has been around for a long, long time. Almost unnoticed, although not really: a malformed and drilled-down interpretation of it is what you currently see all around. Personal computers are a direct offspring of one of the most misunderstood projects of the past century, carried out by the Learning Research Group at Xerox PARC.
A central concept realised by that group was something called a live domain. In fact this system, which bootstrapped for the first time in October 1972, was not just live in the domain part; everything was live, “turtles all the way down”: the operating system, the application development environment, even the compiler! Maybe the world is not yet mature enough to embrace that concept wholly, but a part of it is, and it is about time too.
That part is where the business logic or domain logic of the architecture lives.
Every architecture is wrestling with the problem of where to put the business functionality. Even how to discover that functionality is a problem, resulting in an endless stream of books purporting to offer help.
The centre of any logical architecture should be a simulation model of the organisation. What do I mean by a simulation model?
It is executable and able to run independently from other components (such as a database or a front-end)
“independently” is implemented by “connecting” those components with event publishing on state changes
It “reflects” the real organisation (but there is something magical in the reflection: The Mirrored World)
It is time-aware: changes of state are events on one or more timelines (time warp should be possible!). Every state is time-bound!
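The four properties above can be sketched in a few lines of code. This is a minimal Python sketch of my own (all names are illustrative, not from the original project): state changes are published as time-stamped events, other components connect only by subscribing, and every state is time-bound so time warp is possible.

```python
import datetime

class DomainObject:
    """A live domain object: every state change is published as a
    time-stamped event, and state is always time-bound."""

    def __init__(self):
        self._history = []        # (timestamp, attribute, value) events
        self._subscribers = []    # e.g. a database writer or a front-end

    def subscribe(self, listener):
        # Other components are "connected" only through event publishing
        self._subscribers.append(listener)

    def set_state(self, attribute, value, at=None):
        when = at or datetime.datetime.now()
        self._history.append((when, attribute, value))
        for listener in self._subscribers:
            listener(when, attribute, value)

    def state_at(self, moment):
        """Time warp: reconstruct the state as it was at any moment."""
        state = {}
        for when, attribute, value in sorted(self._history, key=lambda e: e[0]):
            if when <= moment:
                state[attribute] = value
        return state
```

Note that the database and front-end are mere listeners here: the model runs happily without them, which is exactly the independence the list above demands.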
It is quite a different thing to have a simulation model instead of a reified data model, which is what almost every organisation currently has. Not even an information model, mind you, but a data model. It is left to the viewer, the user, through a handicapped tool called a front-end, to make something of the mess, hopefully approaching something that can be called information.
I will leave it up to the reader to come up with the infinite possibilities that will be exposed if you do this – I will come back to you on this in a later article. In the meantime: why did I show the picture above this article? Anyone?
SmallSim is a business simulation tool. It helps you build complex business models and then execute them. With SmallSim you create executable models. The important differentiating characteristic is that these models are not just executable: they execute inside a full-fledged simulation environment.
These business models can be the centre of a business-centred software architecture for enterprise systems (also called the Business Domain or Domain Component).
Using these models you can keep these systems aligned with the business while the technology with which these systems are built changes. Now this technology is J2EE, last year it was Cobol, and next year it will be whatever. As you can read in my article on Business-Centred Architectures, there are many advantages to using a business-centred architecture. The base characteristic of such architectures is the strict decoupling of the business domain components from technological components. This results in a robust and scalable architecture.
The following description is mostly based on a version of SmallSim of several years ago. We are working on a new version which will be multiplatform again (SmallSim became Windows-only for usability reasons that seemed valid at the time) and has a vastly improved look and feel. However, time and money constraints make it impossible to predict when this version will be available.
SmallSim originated from a research project at the University of Groningen, The Netherlands, at the Faculty of Management Science. It was also used in the curriculum for business modelling and simulation for a few years.
Key features of SmallSim
SmallSim is a generic simulation environment for building simulation models
SmallSim offers improved and scalable modelling possibilities due to the consistent use of object-oriented modelling
SmallSim offers full stochastic features such as probability distributions
SmallSim enables you to build complex models quickly and simply, many times faster than traditional simulation solutions!
SmallSim was originally developed in close co-operation with the Faculty of Management of the University of Groningen, the Netherlands; its main champion, dr. A.C.M.A. (Ap) Rutges, used the tool in his lectures on business modelling.
Dr. A.C.M.A. Rutges, from the Faculty of Management, University of Groningen
Students of the course BMT-7 (in 1995-1996), who were (mis-)used for testing the product extensively!
SmallSim’s main strength comes from the fact that it models problem domains in an object-oriented way. In this respect it differs from the main simulation modelling tools such as ARIS, Arena® or BPSim™ and Taylor II. These tools take a process-oriented approach, which is also the approach taken by current standards such as BPMN (Business Process Model and Notation). SmallSim is best compared not with these tools and standards, but with object-oriented modelling languages such as UML. The difference, however, is that:
SmallSim models are executable with full stochastic capabilities – we prefer to say they are simulatable
execution can be monitored to provide rich statistical information and analysis
strategies can be evaluated in advanced scenario factories
Many process-centric tools, for example for workflow modelling, offer simulation-like facilities. In fact we ourselves built such a modelling tool, LogSim. They enable you to build “virtual” organisations and run the simulation to see how well your design holds up under “realistic” circumstances. Well, these circumstances are actually not so realistic, as any simulation expert will be able to tell you. Realistic simulations entail full stochastic behaviour, taking into consideration many different kinds of error handling (for example allowing for run-in periods to dampen fluctuations in start-up situations). It is possible to achieve comparable results with well-crafted simulation tools; our extensive experience with advanced simulations has shown us as much. But this can only be achieved by experienced simulation experts, and the resulting models are often complex and very hard to modify or tailor to changing conditions. They are also hard to explain to business experts.
Another Dutch company, BWise, attempted to develop tools along comparable lines. The difference, again, lies in the emphasis reflektis puts on modelling from an object-oriented and component-based perspective, compared to which the half-hearted but probably well-meant attempts of BWise fall short in many respects.
In SmallSim, you define object types. For example, in modelling a mail office, an object type or class could be a Customer or a Teller (see illustration).
The class Teller contains the description of its instances, objects that are sometimes created and destroyed during the run of a simulation, or live during the entire simulation. Three aspects of objects can be specified: Arrivals, Attributes and Tasks.
An Arrival is a specification of the lifetime of the object: is it always available? Or is it entering the simulation, and if so, with which arrival schedule? Stochastic arrival times can be specified here. You can also specify arrivals that are drawn from real world measurements.
Attributes are slots containing values, which can be numbers, strings, dates and so on. These attributes are available during the lifetime of the object, and can be used in the behaviour specification, described in Tasks. They remember the state of the actual object, the instance that is living inside the simulated business world. For example it could be the age of a product, used to determine the chance of breakdown.
A Task is the actual behaviour of the object, as you can see in the illustration above. It contains a list of activities, which will be executed sequentially by the object during its lifetime. The list might contain one or more loops, which means the object keeps executing those activities while the loop condition is true (potentially forever), or jumps (analogous to a goto).
There is a quite extensive list of standard activities to choose from, but SmallSim is completely extensible. You are free to specify custom activities, written in the powerful Smalltalk language. In the code for these custom activities you have access to the attributes and other activities of the object, as well as its direct collaborators, i.e. those simulation elements that are connected with it on the canvas. This creates infinite possibilities for creating models as complex as you want them to be, and since the Smalltalk scripting language is exceptionally suited for describing complex business behaviour (Smalltalk has sometimes been designated “the DSL of DSLs”), you can do this in the simplest way.
A Comment contains a textual description that can be used as documentation of the object.
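The anatomy of a SmallSim class (Arrivals, Attributes, Tasks, Comment) can be sketched roughly as follows. This is a Python approximation of my own; the real tool is Smalltalk-based, and all names here are illustrative:

```python
import random

class SimClass:
    """Sketch of a SmallSim-style class: Arrival, Attributes, Tasks, Comment."""

    def __init__(self, name, comment=""):
        self.name = name
        self.comment = comment      # textual documentation of the object
        self.arrival = None         # callable returning an inter-arrival time
        self.attributes = {}        # default attribute values (slots)
        self.tasks = []             # activities, executed sequentially

def run_instance(cls, rng):
    """Create one instance and execute its task list sequentially."""
    state = dict(cls.attributes)    # each instance remembers its own state
    for task in cls.tasks:
        task(state, rng)
    return state

# Example: a Customer whose age is drawn on arrival, then incremented
customer = SimClass("Customer", comment="arrives at the mail office")
customer.arrival = lambda rng: rng.expovariate(1 / 25)   # mean 25 minutes
customer.attributes = {"age": 0}
customer.tasks = [
    lambda s, r: s.__setitem__("age", r.randint(20, 60)),
    lambda s, r: s.__setitem__("age", s["age"] + 1),
]
```

The point of the sketch: the class holds the description, while each simulated instance carries its own state, created and destroyed as the simulation runs.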
These classes are placed on a canvas, as you can see in the first illustration. After placing these basic building blocks on the SmallSim canvas, behaviour and properties are defined for each class object, as well as relations between the elements. This is done in exactly the same way as object-oriented modelling is done. Several approaches can be chosen; we prefer the Responsibility-Driven Design approach (Rebecca Wirfs-Brock) combined with CRC sessions to model the behaviour of each object. In these sessions we can build the model live, with immediate feedback to the business experts, to validate the model and play with it.
Definition of behaviour in SmallSim is characterised by the addition of stochastic variables. Methods or operations are specified with a certain duration. In a method many actions can be specified, such as:
Acquire, release and consume fixed and shared resources
Wait for a specified duration (hold)
Wait for or signal specified conditions that can be shared among many objects (i.e. a signal)
Specify certain sections of tasks inside a loop
Specify any action using scripts with many templates to choose from, or write your own script
Create new objects or resources
Stop the simulation
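The interplay of the first few actions (acquiring a shared resource, holding for a duration, releasing it) can be illustrated with a deterministic single-server sketch. This is my own simplified Python rendering, not SmallSim code: the teller is the shared resource, customers wait in line to acquire it, hold for their service time, and release it.

```python
import random

def teller_trace(arrivals, service, rng, n_customers):
    """Single teller (a shared resource): each customer acquires the
    teller, holds for a service time, then releases it.
    Returns the time each customer spent waiting in line."""
    t = 0.0
    arrival_times = []
    for _ in range(n_customers):
        t += arrivals(rng)            # stochastic inter-arrival time
        arrival_times.append(t)
    free_at = 0.0                     # moment the teller becomes free
    waits = []
    for arr in arrival_times:
        start = max(arr, free_at)     # wait until the resource is free
        waits.append(start - arr)     # time spent in the waiting line
        free_at = start + service(rng)  # hold for the service duration
    return waits
```

With a fixed inter-arrival time of 1 minute and a fixed service time of 2 minutes, the waiting line visibly builds up: each successive customer waits one minute longer than the previous one.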
Below you can see the task list of a Customer object. The loop element of the list is selected. Not only can you specify the behaviour of active objects, you can also use attributes as information needed for an object’s behaviour. For example, when an object’s age has reached a certain amount, another path of action is taken (inside a loop or switch).
An important aspect of active objects is their arrival. Arrivals can be stochastic, but also fixed, or triggered by other events. Each active object can create other active objects (or schedule them to arrive in the simulation at a specified time).
With these predefined behaviours you can model almost any problem, but the powerful scripting language enables you to write your own scripts for behaviour of any complexity. Since the scripting language is Smalltalk, the full functionality of a full fledged programming language is at your disposal.
SmallSim distinguishes between active objects (of which a property dialog is shown above) and passive objects. Passive objects are used as resources in the simulation. Resources are used to synchronise behaviour of active objects. The active objects compete for access to the resources, and the resources specify whether concurrent access is allowed and how (waiting line behaviour).
After specifying the structure of your problem domain, the next thing to specify is the run configuration. This entails setting up special runs in batch- or interactive mode, and specifying what information you want to gather. After running the simulation, you can inspect the gathered information, for statistical analysis, and to further fine-tune your simulation runs.
SmallSim supports all relevant probability distributions, all of which are built on industry-strength random number generators (a shortcoming of many comparable tools, which often simply reuse the random number generators from the standard C# or Java libraries).
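The practical point of dedicated generators is that each stochastic process can get its own seeded stream, making runs reproducible and independent. A minimal sketch in Python (the stdlib Mersenne Twister stands in for the tool's own generators; class and method names are mine):

```python
import random

class StochasticProcess:
    """Each stochastic process owns a seeded generator, so runs are
    reproducible and streams do not interfere with each other."""

    def __init__(self, seed):
        # stdlib Mersenne Twister as a stand-in for a dedicated generator
        self._rng = random.Random(seed)

    def exponential(self, mean):
        return self._rng.expovariate(1.0 / mean)

    def uniform(self, low, high):
        return self._rng.uniform(low, high)

    def normal(self, mu, sigma):
        # durations cannot be negative, so clamp at zero
        return max(0.0, self._rng.gauss(mu, sigma))
```

Two processes seeded identically reproduce exactly the same draws, which is what makes scenario comparisons and regression runs meaningful.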
Note: LogSim is not to be confused with the British company Logistics Simulation Ltd., or with a log-making simulation program from HALCO Software.
LogSim is a process mining tool. It aims to help in creating executable business process models. LogSim is a business process modelling tool with integrated simulation facilities. In LogSim you not only build business models, but you execute them, you let them run while the software gathers all kinds of metrics to help you evaluate your models for effectiveness, cost, speed etc. The models help in detecting bottlenecks in your current solution so that you can improve your business performance.
Business process models are important in many contexts. They are used for optimising businesses, as tools for reaching a more mature level, as input for creating SOAs, etc. Most service-oriented platforms, such as Tibco, Cordys, SAP, WCF or BizTalk, require some kind of process model, and they usually offer their own environment in which you can specify them. However, those environments do not help in actually measuring the performance of your solution. Before you can do that with any level of accuracy, you need to implement the process, run it for a minimum amount of time, and gather metrics. With LogSim you don’t need to do that. With LogSim you can build your models, play with them, and when you are satisfied that your solution can perform according to your requirements, you can implement it on your platform of choice.
Currently LogSim is a proof of concept. The product does not yet offer all the functionality you might wish for, such as export to BPEL. Support for BPMN 2.0 is not fleshed out yet either, although it is on the roadmap for version 4. It still is a one-man project, so our resources are limited (yes, we are looking for funding). However, we believe we already offer more than others, especially in creating high-quality process models, so we invite you to give LogSim a try.
LogSim originated from a research project that took place in 1993-94 at the University of Groningen, The Netherlands, at the Faculty of Management Science. It was a collaborative effort with Moret, Ernst & Young, a management consultancy organisation that wanted to provide their clients with a tool that enabled them to model their business processes. For this they had developed a modelling language called Logistic Model and Notation (LMN), but they had envisioned that “just” making models and pictures was not enough: to build robust business process models you need to be able to run them, using a robust simulation environment that supports scenarios, stochastic events, statistical monitoring, etc.
In the course of 1993 and 1994 various versions of LogSim were produced, showing a gradual increase in understanding the power of combining a modelling language (which already existed and was used extensively, although only as a notational tool) with a simulation environment (which also existed in the form of SmallSim, a generic simulation environment that I developed). The 1.x versions were multiplatform and were actually developed on a Macintosh. The 2.x versions were Windows-specific for various reasons that seemed valid at the time.
In 2011 we rebooted the project, which resulted in LogSim 3 (a new version still using the original Logistic Model and Notation). We also started work on LogSim 4 (which does not use LMN anymore as the notation but an extended version of BPMN 2.0). These versions are again multiplatform and available on all supported platforms.
Currently the focus (as far as focus goes…) is on LogSim 3, so that we can flesh out the simulation engine and statistical tooling before changing the notation.
LogSim wants to help in creating realistic models. Realistic models reflect the actual complexity of real-world processes, in which stochastic events, uncertainty and waiting lines (in other words: chaos) render most idealised models unusable in practice.
Also LogSim wants to help the modellers in adapting models to reach goals like cost reductions and optimal efficiency. For this a model is not enough: you need to evaluate whether you are really choosing the correct strategies to reach those goals by playing with the different options and seeing the result of your decisions. Adapting your models should be as lightweight and simple a process as possible.
LMN (the modelling language used) was a remarkably prescient predecessor (10 years ahead!) of BPMN (Business Process Model and Notation). Most of the building blocks of LMN can be mapped to BPMN elements very nicely, as we will explain elsewhere. But LogSim contains on top of these modelling facilities a complete and robust simulation environment. Using this environment you can actually run your processes, and validate them against realistic dynamic behaviour, such as waiting lines, stochastic arrivals etc. With LogSim you build executable business process models.
This is different from the executability BPMN offers, and from the various vendor tools contained in the products mentioned earlier, such as Tibco. The difference is the high level of realism, close to what you will encounter in the “real” world. Executing a BPMN model in a service bus is something completely different from simulating that model.
Work is in progress for versions 3 and 4. While version 3 will still only support LMN, version 4 will use an extended version of the BPMN 2.0 notation and syntax in order to be more compliant with the standard as defined by the OMG. BPMN as such lacks a few elements essential for simulation purposes, such as Waiting Lines, so we needed to find a solution for those issues that is as much in line with BPMN as possible.
LogSim 4 will make it possible to build executable and simulatable BPMN models. Many environments and tools promise this, we know, but we believe we make a difference, because our approach is different. Simulation in LogSim is not an add-on to a modelling tool or language, but the modelling tool is built on top of a robust simulation environment, containing robust simulation features such as:
run scenario building with bounded parameters
unattended runs and information gathering
industry-strength stochastic capabilities (probability distributions, random number generators)
The same simulation engine was used in another tool we created, SmallSim.
We hope to distribute the software free-of-charge again. Beta versions will be made available for those brave souls willing to participate in early reviews.
We are mainly working on version 3. This is a new version of LogSim, however still using the original LMN for its language. We do this because we believe that the modelling language LMN (short for Logistic Model and Notation) still has many interesting features, and because at the moment our time-resources are very limited and bringing out this version can be done relatively fast, while the work for version 4 will take longer. Remember, LogSim is a spare-time, one-man project at the moment!
LogSim 3 is multiplatform and available on all supported platforms. In this case multiplatform really means what it says: it runs bit-identically on Windows, Mac, Linux and others. You can open and work on the same model on all those platforms.
LogSim 3 uses a business process modelling language called the Logistic Model and Notation (LMN). This language may be regarded as a predecessor of BPMN with some interesting features that made it especially suitable for simulating complex logistic processes. The main distinguishing feature is that LMN contains the concept of Waiting Lines, with full stochastic support.
This walkthrough uses the LogSim 3 engine. The screens were produced on a Macintosh running Mac OS X, but LogSim 3 will run on all supported platforms and will adopt the look-and-feel of the platform you use. The screenshots are already several years old; modern versions look somewhat better, I assure you!
Logistic Model and Notation takes objects (defined with various properties) as central concepts, and models the flow of these objects through many stations or Transformators in a business.
These objects are called WorkOrders. In the real world these can be documents, actual products on which work is done (reflected by the changing attributes of the products during the flow), or abstract objects that you only use as placeholders for the process flow steps. The corresponding concept in BPMN is the Token; the difference is that in LMN a WorkOrder can have an indefinite set of properties, which can and will be modified during processing in the execution flow. And what is even more powerful: the modification of these properties is specified in stochastic terms as well!
Each Transformator is a metaphor of a workplace, where one or more Transformers (think of Transformers as your work force) work on these objects. This work is modelled as changing one or more of the object’s properties, as explained above. This is an extremely powerful feature, and the most distinguishing one of LogSim.
Transformators correspond to the Tasks of BPMN.
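The WorkOrder/Transformator pair can be sketched as follows. This is a hedged Python illustration of my own (property names, the uniform duration and the normal increment are invented for the example, not taken from LMN):

```python
import random

class WorkOrder:
    """A WorkOrder flows through the model; its properties can be
    modified at every station it passes through."""
    def __init__(self, **properties):
        self.properties = dict(properties)

def transformator(order, rng):
    """A workplace where a Transformer works on the order: the work is
    modelled as a stochastic change of one or more properties, taking a
    stochastic amount of time."""
    duration = rng.uniform(0.3, 0.6)            # stochastic work duration
    order.properties["checked"] = True
    order.properties["value"] = (
        order.properties.get("value", 0) + rng.gauss(10, 2)
    )                                           # stochastic property change
    return duration
```

Both the duration of the work and the change applied to the order's properties are draws from distributions, which is the distinguishing feature the text describes.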
The objects flowing through the model are instantiated by external agents, which send events into the organisation we are modelling. These events are characterised by the instantiation of these objects with their default properties. External agents can also subscribe to receive events from within the model at certain moments.
Objects are created by Events. We can see here an initial event called SWIFT-payment, creating objects on average every 25 minutes according to a stochastic distribution (in this case an exponential distribution).
The objects created by this event have a set of characteristics, which you can see above. Two attributes are shown here; LogSim 3 supports an unlimited number of them (as long as the percentages total 100 percent, of course). Each of these can be assigned a due date; that is, we specify that in order to comply with our business rules these objects should be processed within this period of time.
Finally we can specify how many objects or work orders are created. Again, this quantity can be specified as a stochastic distribution. In this case just one, as a fixed quantity, is produced, but it could just as well have been drawn from any distribution.
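Generating the arrival moments for an event like SWIFT-payment amounts to summing exponentially distributed inter-arrival times. A small sketch in Python (my own function name; the 25-minute mean matches the example above):

```python
import random

def arrival_times(rng, mean_interval, horizon):
    """Generate stochastic arrival moments: exponentially distributed
    inter-arrival times with the given mean, up to a simulation horizon.
    A stochastic quantity per event could be drawn the same way."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interval)
        if t > horizon:
            return times
        times.append(t)
```

Over an eight-hour day with a 25-minute mean you would expect roughly nineteen arrivals, but every run differs, which is exactly the point of simulating rather than averaging.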
External Agents correspond to the Events of BPMN, originating from other parts of the process environment, called Pools in BPMN.
These workorders are first dumped into the first waiting line, SWIFT-payment arrival in this case. Waiting lines are interesting, and very important modelling constructs that are completely overlooked in BPMN. They represent the stochastic robustness of modelling in LogSim: it is precisely the stochastic aspect of reality that many modelling techniques fail to address properly, and which LogSim handles well. Waiting lines do the trick.
BPMN explicitly chose to model only processes, and view waiting lines (or queues) as data structures. This is why there is no explicit way to model waiting lines in BPMN. The only way to “simulate” them is to create a specific Pool to dump the workorders in, and processes that add to and take from the Pool.
For the simulation facilities in LogSim waiting lines are very useful because they will gather any statistics you deem interesting, which helps a lot in determining the optimal process configuration.
In the Waiting Line shown here, there are no workorders waiting, but since we have done a few runs, many workorders have passed through the waiting line. Click on the bar chart button and you will see a histogram of the time workorders have spent waiting inside the waiting line. We can also see that 168 workorders have gone through this waiting line:
This information can be useful in determining whether your business process is optimal or not.
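The statistics-gathering behaviour of a waiting line is simple to sketch: record how long each workorder waited, and bin those waits into a histogram. A minimal Python rendering of my own (not LogSim code):

```python
from collections import deque

class WaitingLine:
    """A FIFO waiting line that records, for every workorder passing
    through, how long it waited: the raw data for the histogram."""

    def __init__(self):
        self._queue = deque()    # (workorder, time it entered the line)
        self.waits = []          # gathered waiting times
        self.passed = 0          # total workorders that went through

    def enter(self, order, now):
        self._queue.append((order, now))

    def leave(self, now):
        order, entered = self._queue.popleft()
        self.waits.append(now - entered)
        self.passed += 1
        return order

    def histogram(self, bin_width):
        """Bin the waiting times, like the bar chart in the tool."""
        counts = {}
        for w in self.waits:
            b = int(w // bin_width)
            counts[b] = counts.get(b, 0) + 1
        return counts
```

Because every entry and exit is timestamped, the line can report counts, waiting-time distributions, and anything else you deem interesting after a run.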
A Transformator is a certain activity or work on an object during a specified time interval, here a Uniform stochastic distribution between 0.3 and 0.6 minutes. During the work in this specific Transformator no attributes are modified. However, these could be added to the list of transformations under “Actions”. Objects can have any number of basic scalar attributes, such as Strings, Numbers or Dates.
The actual work is not done by the Transformators, but by Transformers. An organisation (the containing entity in which the business processes are run) defines any number of them. These Transformers, or the work force, can be persons or other actors such as computers or machines. They are allocated to one or more Transformators using a time schedule with priorities. For example, a desk employee is allocated to the front office during work hours with a priority of 60%, but to a manager with a priority of 70%, which implies that when the manager needs the employee, he or she is forced to stop working on front-office tasks. The activities performed by the workers can be monitored and statistically analysed, in order to be able to make statements about the efficiency of a process scenario. The next illustration shows the list of Transformers of an Organisation. The selected Transformer named “Central Computer 1” is allocated to two Transformators, “Receive SWIFT-payment” and “Book on VV-account”. This is done by selecting from a drop down list of currently defined Transformers (the blue rectangles in the model).
Properties window for the workers in the model. This shows the list of workers, as well as the list of scheduled transformations.
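The allocation rule described above (the demanding Transformator with the highest priority wins the Transformer, preempting lower-priority work) can be sketched in a couple of lines. This is my simplified reading of the rule, not the actual LogSim scheduler:

```python
def allocate(demands):
    """Given the Transformators currently demanding a Transformer as
    (name, priority) pairs, return the one that gets the worker: the
    highest priority wins, preempting lower-priority work."""
    if not demands:
        return None    # the Transformer is idle
    return max(demands, key=lambda d: d[1])[0]
```

In the desk-employee example, a manager demanding at priority 70 preempts front-office work at priority 60, which is exactly what the sketch returns.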
What makes LogSim special, and able to closely resemble the complex reality you want to model, is the fact that you can specify special stochastic flow controllers, called Logistic Regulators. These objects can do things to your object flow like:
Switch: switching on certain conditions (effectively branching the flow, or routing it)
Splits: splitting objects to enable object copies to go through a different flow
Joins: joining objects at specified points in the flow
These components correspond almost one-to-one to the Gateways in BPMN.
Above you can see a Logistic Regulator, here a Switch, which routes objects (or work orders) based on a stochastic distribution. When the draw (similar to throwing a die) from the distribution indicates success, the object is routed to the node named “prepare, relay”.
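The essence of such a stochastic Switch fits in a few lines. A hedged Python sketch of my own (here the "distribution" is reduced to a simple Bernoulli draw; real switches can draw from any distribution):

```python
import random

def switch(order, rng, p_success, on_success, on_failure):
    """A Logistic Regulator of type Switch: draw from a distribution
    (here a Bernoulli draw with probability p_success) and route the
    workorder to one of two branches accordingly."""
    if rng.random() < p_success:
        return on_success(order)
    return on_failure(order)
```

Run it many times and the workorders spread over the branches in the specified proportion, which is what makes scenario statistics meaningful.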
The flow of objects is continuously monitored, which enables you to analyse statistically what happens. Running different models can be used as a simple scenario analysis.
Below you can see an example of a finished model, modelling international payments through SWIFT.
The application window showing a completed logistic model from which the previous illustrations are generated. The triangles are other flow controllers, called Splits, which effectively split the flow of control. Usually these Splits are later combined in controllers called Joins. The Join effectively combines objects into one new object, while keeping the accumulated flow information about the original objects. No information is lost.
Click on the thumbnail to see a finished model of a business process, in this case the processing of foreign bank SWIFT payments to a Dutch bank. The following object types can be recognised:
Waiting lines (queues)
Transformators (where Transformers do their thing)
Joins (a Logistic Regulator combining objects from several workflows)
Splits (a Logistic Regulator splitting objects across several workflows)
Switches (a Logistic Regulator containing logic to decide how to route objects)
Workflows (this is the route taken by objects)
Events (messages sent out to External objects)
Messages (messages sent in from External objects)
Every stochastic aspect in the system is controlled by high-quality random number generators. Both the random number generators and the stochastic distributions are built using the latest scientific research in that area.
On the right side you can see the random generator window, where you can specify the random generator you want to use for a specific stochastic process. Several algorithms are available, and you can also provide your own file with “real” random numbers, or numbers obtained from event logs.
Random generators provide the input for the broad spectrum of stochastic distributions that control the events in the model, including various custom distributions.
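The pipeline from uniform generator to concrete distribution can be illustrated with inverse-transform sampling. This is a generic sketch of the idea, not LogSim’s implementation; the function name and the exponential example are my own choices:

```python
import math
import random

def exponential_draw(uniform_draw, mean):
    """Inverse-transform sampling: turn a uniform draw in [0, 1)
    into a draw from an exponential distribution with the given mean."""
    return -mean * math.log(1.0 - uniform_draw)

# A seedable generator makes every run reproducible; the same uniform
# source could be swapped for a file of "real" random numbers or
# numbers obtained from event logs.
rng = random.Random(2024)
service_time = exponential_draw(rng.random(), mean=5.0)
```

The key design point is the separation: one replaceable source of uniform randomness feeds every distribution in the model.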
LogSim was originally developed in close cooperation with Ernst & Young Management Consultants. The Logistic Model and Notation (LMN) has been published by the project members from Ernst & Young; as far as I know it is only available in Dutch:
Drs. N.A. Brand, ing. J.R.P. van der Kolk: Werkstroomanalyse en -ontwerp: Het logistiek vriendelijk ontwerpen van informatiesystemen (Workflow analysis and design: the logistics-friendly design of information systems)
This article (originally published in Dutch) argues that object-orientation is the only viable approach for modelling complex systems. It also argues that the better metaphor for doing this is derived from biological systems.
We are sorry, but the article has not yet been translated into English. If you want to read it in Dutch, please go to: Ecoology
This question was asked on Stack Overflow and ModelingLanguages, and prompted me to try to clear up some persistent preconceptions about UML. First of all: UML is not about modelling object-oriented software.
Origin of object-orientation
But maybe we should go back to what object-orientation is. OO (shorthand for object-orientation) was invented around 1970. Xerox had a group called the Software Research Group, part of a think tank created to research the possible threats the modern computer posed to Xerox’s prime business: copying machines. Within a few years this group invented almost everything around what we now call the modern computer: displays with bit-mapped overlapping windows, a keyboard with a mouse to manipulate the objects on the display, icons to represent various types of information, and even the network to link all those computers together, called Ethernet.
To create the complex software needed to run those personal computers, an object-oriented programming language (as well as, incidentally, an object-oriented operating system) was deemed necessary. Alan Kay originally coined the term “object-orientation”, although he later stressed that a better term would have been “message-oriented”, since he envisioned a complex system of interacting elements creating complex behaviour by sending messages to each other. For more on this original vision, read the August 1981 issue of Byte magazine devoted to Smalltalk.
The assumption was that we needed a powerful new way of thinking about problems, to enable creating software that is orders of magnitude more complex. But you see, this was not just about software. It was about a paradigm that helps in managing complexity. OO was just that, and it still is.
Origin of UML
When the UML effort started, it only tried to merge a multitude of approaches to visually representing those OO programs. So UML is not so different from OO: it is just a view on the same thing, a complex system.
UML did introduce something new, however: the metamodel. Intended mainly for tool developers, the metamodel describes the OO modelling paradigm itself. It defines classes and metaclasses, properties, and associations (as access paths for message passing).
The metamodel of UML is extensible. You can extend it with Profiles, effectively creating a specific set of language elements with tightly defined semantics for a specific problem domain. This should not be confused with Domain Specific Languages (DSLs): a DSL specifies a set of elements or building blocks in a domain, for example the financial domain, whereas a UML Profile contains the semantic definitions of the syntax used to describe such domains. For example, you might create a Profile for Entity Relationship modelling, or a Profile for functional languages.
One of my first endeavours when I learned object-oriented programming was to create a planetarium; astronomy has been a hobby of mine all my life. To simulate the movements of bodies in the solar system, a mathematical model is used. The orbit of, say, the Moon can be described by an equation with a lot of variables (an approximation, since there is no analytical solution of the many-body problem in physics, that is until recently). My first thought was: well, this is mathematics, so I will probably have a hard time moulding the mathematical equations into objects, methods and messages. But to my delight I found this was not the case at all. Once I realised that my problem domain was mathematics, and specifically equations, the follow-up was easy and everything fell into place perfectly. I had Equation objects, CelestialBody objects using those to tell their location, and time nicely proceeding, helping the celestial bodies to move.
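That design can be sketched as follows. This is a hypothetical reconstruction of the idea, not the original code: the class shapes match the description above, but the grossly simplified circular-orbit equation and all parameter values are illustrative assumptions:

```python
import math

class Equation:
    """Wraps an orbital equation: maps a time to a position.
    Here a toy circular orbit stands in for the real many-variable one."""

    def __init__(self, radius, period):
        self.radius = radius
        self.period = period

    def position_at(self, t):
        angle = 2 * math.pi * t / self.period
        return (self.radius * math.cos(angle),
                self.radius * math.sin(angle))

class CelestialBody:
    """Uses its Equation to tell its location; advancing time
    is what makes the body move."""

    def __init__(self, name, equation):
        self.name = name
        self.equation = equation

    def location(self, t):
        return self.equation.position_at(t)

# Illustrative values: mean Earth-Moon distance (km), sidereal month (days).
moon = CelestialBody("Moon", Equation(radius=384_400, period=27.3))
```

The point is the division of labour: the mathematics lives in Equation objects, the bodies merely send them messages, and time drives the whole simulation.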
To summarise: object orientation, and UML as a domain-independent language, can be used to describe any problem domain efficiently and to help with the complexities in those domains. And you are free to implement your solution in an object-oriented language like Smalltalk, or a functional language like Haskell. OO is domain-agnostic, and implementation-agnostic.