
The Great OOP Debate

Object Orientation - triumph or failure?
Monday 1 January 2007.
 

Dermot Hogan, a long-time OOP-sceptic, is now the lead developer of an OOP IDE. Here he debates the pros and cons of Object Orientation with OOP convert Huw Collingbourne…

Huw: Back in June 2005, you wrote a column in which you argued that Object Oriented Programming (OOP) has turned out to be a failure. In the period since then, you and I have been working on the Ruby In Steel IDE written for one object oriented language (Ruby) and written (largely) in another object oriented language (C#). Is this an admission that you got OOP all wrong when you criticised it before?

Dermot: A little background first. I’ve been working on a Visual Studio ‘package’ which adds Ruby editing and debugging support to Visual Studio 2005. Visual Studio, in its current incarnation anyway, is based on COM interfaces. COM is object oriented in a way, but isn’t object oriented in the modern sense in that it doesn’t use inheritance. I’ve implemented the Ruby part using a Microsoft-provided tool – the fully object-oriented Managed Package Framework or MPF – and the debugger part by connecting up C# code to the COM interfaces. So I’ve used both object orientation with inheritance (the MPF) and the older and simpler COM ‘encapsulation’ extensively.

In working with the MPF, the main difficulty has been the sheer complexity of figuring out what was going on (the thing they don’t tell you about writing packages for Visual Studio is that it is damned hard work!). It wasn’t made any easier by having to work through a class hierarchy. At the end of the process, I hadn’t derived a lot of things from the original MPF; instead, I copied it and modified it (extensively) to make it do what I wanted. I thought long and hard before doing this – but it was the right decision.

But further than that, I ended up duplicating a lot of what I had already copied because I needed two ‘languages’ – Ruby and ‘embedded Ruby’ in Rails template files. I did manage to leverage some of the supposed benefits of OO in that I didn’t have to duplicate everything, but that was offset by having to track down some deeply subtle bugs in the MPF hierarchy, which were mainly due to me using the wrong version of an object – the non-subclassed version rather than the derived version, for example.

In contrast, the COM debugger implementation was more straightforward. There was no MPF to help – but equally, the MPF wasn’t in the way either. Now, of course, there were other problems as anyone who has tried to connect C# code up to a COM system will tell you (I spent a week tracking down memory leaks), but on the whole, I far preferred doing the COM stuff to hacking my way through derived types and classes.

So to sum up, my experience over the last year has confirmed what I’ve observed and experienced over many years – Object Orientation is vastly overrated and overused. The MPF isn’t bad – I suspect it’s fairly typical of such things – but I honestly feel it would have been quicker, in the end, to have been presented with a simple COM implementation ‘template’ and left to modify it as required.

This has been reinforced by observing the difficulties others have had in using the MPF. All you have to do is look at some of the posts in Microsoft’s Visual Studio Extension forum to see what I mean.

Huw: I said just now that both Ruby and C# are object oriented. In spite of that, they are very different languages. In fact, most ‘mainstream’ OOP languages such as C#, Delphi, Java and C++ take a pretty laid back (or maybe that should be ‘sloppy’?) approach to OOP. C++ and Delphi are a mishmash of procedural and OOP; Java and C# are better but still seem to have cherry-picked bits of procedural languages and mixed them up with a few OOP ideas.

For example, one of the key ideas of object orientation is encapsulation. But most OOP languages implement just one bit of encapsulation – by binding methods into classes – and forget the other bit: information hiding. In C#, for example, it’s entirely up to the programmer whether or not the variables inside an object are accessible to code ‘on the outside’.

I can’t figure out why so many of the people who design OOP languages don’t seem to think information hiding is important. If other programmers can poke around inside your objects, your encapsulation is hopelessly broken. Some people say, well, heck, you can make variables private. But with a language like C#, you can never be certain that other people in a programming team aren’t writing code that depends on the implementation details of classes. One of the great ideals of OOP is supposed to be that each object is self-contained; its implementation details are hidden and the only way to access data is via well-defined interfaces – data goes into a method in one place and is returned at another place. But when data hiding is not enforced, you can never be sure that this will be the case. That’s a big loophole, isn’t it?
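
A minimal C# sketch of that loophole (the class names are invented for illustration, not code from Ruby In Steel):

    // A class whose author exposes internal state directly.
    public class Account
    {
        public decimal balance;   // an implementation detail, but public
    }

    // Elsewhere in the team, someone quietly depends on that detail,
    // so 'balance' can never again be renamed or restructured safely.
    public class ReportWriter
    {
        public decimal Total(Account a) { return a.balance; }
    }

    // Information hiding closes the loophole: the state is private and
    // the only way in or out is through well-defined members.
    public class SafeAccount
    {
        private decimal balance;
        public void Deposit(decimal amount) { balance += amount; }
        public decimal Balance { get { return balance; } }
    }

The compiler is perfectly happy with both versions; only team discipline separates them – which is exactly the loophole Huw describes.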

Dermot: I’ve never been that bothered about data hiding. It’s a nice-to-have, but when you’re at the sharp end of trying to figure out what’s going on, it’s not that important.

Huw: Really? But how about when you are working with large teams? I know you’ve worked on some big projects in the past – international banking systems and whatnot. Surely it would have been enormously useful to have the sure and certain knowledge that the stuff you intend to be private – the implementation details of all your coding – really does remain private.

In my view, it is simply neater, safer and more damned elegant when programs are divided up into neat chunks with clearly defined routes of communication. Whether that’s done via modularity as in Modula-2 or encapsulation as in Smalltalk doesn’t really bother me. In my opinion, data hiding is one of the great ideas of programming to which many people have paid lip service but which no mainstream language takes seriously or does thoroughly. That goes for Ruby too. There are all kinds of odd ways in which you can poke about inside an object or retrieve a value from a method which the person who wrote that method might never have intended you to use. As far as I’m concerned, if you are going to do encapsulation, you should do it properly or not at all.

Dermot: That’s all true, but it comes back to building interfaces to do the data hiding. Of all the projects I’ve worked on, the best was a client-server dealing room system which was based around a set of interfaces that I specified. There was no way that the individual programmers could see behind these interfaces and so they didn’t fall over one another. Not only was the data hidden, the internal wiring was as well. From time to time, I had to re-specify the interfaces, but because the interface was the ‘terminal’, so to speak, you didn’t have the OO problem of the changes propagating down the OO hierarchy like a demented hacker on drugs. This was some time before COM was invented by the way.

Huw: So what do you think are the main benefits of OOP? Encapsulation? Inheritance? Polymorphism? Something else…?

Dermot: Ha! The main merit seems to be that ‘gurus’ can invent an ‘ology’ and stick their names to the front of it! I’ve come across some semantic drivel in my time, but OO books are by far the worst. More seriously, I think that OO works well for ‘frameworks’ like the .NET Framework and it’s also not bad when you need to make small changes to existing programs. But most programs simply don’t fall into those categories. I would guess that most programming activity goes into databases of one form or another and the changes to these are not driven by anything approaching OO methodologies. So you have the fundamental problem that the user (often the government) has done a 180-degree somersault leaving your precious OO model high and dry. So what do you do? Rewrite your OO model to reflect the fact that you’re going to the North Pole and not the South? Or bastardize your OO design? Most designers will do the latter (quite simply because the user will not accept that a re-design is required) – with the usual consequences.

Huw: Of all the fundamental OOP ideas, I have to say that the one that’s never really persuaded me is inheritance. True, it has its place. Sometimes it can be very useful (in an adventure game I wrote, it was, indeed, invaluable – though I’m not sure how ‘typical’ of OOP projects an adventure game really is). In some cases, I’d say inheritance causes more problems than it solves. I remember wasting a good morning’s work writing some features for Ruby In Steel this summer only to discover subsequently that the same features had already been implemented in one of Microsoft’s base classes. I didn’t know about that because the base class definition was deep inside another file in another directory. It was only when I went back and hunted around in that code that I realised that my code was repeating something from its parent class. I’m sure this must be a common problem. Have you fallen into any traps like that?
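
A contrived C# sketch of that kind of trap (the class names are hypothetical, not Microsoft’s actual MPF types):

    using System.Linq;

    // Buried in another file, in another directory...
    public class LanguageServiceBase
    {
        // The base class already knows how to check braces.
        public virtual bool BracesBalanced(string text)
        {
            return text.Count(c => c == '(') == text.Count(c => c == ')');
        }
    }

    // Months later, unaware of the above, a developer spends a morning
    // 'adding' the feature the parent class already provides.
    public class RubyLanguageService : LanguageServiceBase
    {
        public override bool BracesBalanced(string text)
        {
            int open = 0;   // a from-scratch duplicate of the base behaviour
            foreach (char c in text)
            {
                if (c == '(') open++;
                if (c == ')') open--;
            }
            return open == 0;
        }
    }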

Dermot: Actually, I don’t think that’s an OO problem. Rather, it’s a problem of intrinsic complexity. But where OO makes it worse, is that people think that they are making things simpler for the end-user by burying the logic somewhere. In fact, they are just digging a deeper hole.

Huw: I have to say that it seems to me that an awful lot of supposedly ‘OOP’ code these days is still written in a traditional – ‘procedural’ – style. By that I mean that classes often tend to take the place of what you would call a code library in a procedural language. They are used to wrap up vast amounts of loosely related code for easy reuse. Methods in C#, say, or even in Ruby, are frequently quite long but the class hierarchy is relatively shallow. Compare that with the way that Smalltalk classes are written. In Smalltalk, methods are typically very short but the class hierarchy is very dense. I guess you could write classes like that in C# or Ruby but the development environments don’t really support that style of coding. In Smalltalk, the code and its environment work together very tightly so when one class descends from another, you can instantly see this relationship in the hierarchy browser. Maybe that’s one of the things that modern OOP languages forgot – the importance of making the language work in cahoots with its environment.

Dermot: I’d agree with you here. I don’t think OO works very well away from an IDE with a first class browser. In fact, if you look at what the designers of Smalltalk built – it was conceived as an integrated whole. The ‘browser’ was just as important as the ‘Smalltalk’ language, the ‘mouse’, the ‘virtual machine’, the ‘windows’ … come to think of it, was there anything that those guys didn’t invent?

Huw: Be honest with me: if you were given a free choice, would you prefer to be working in an OOP language or a procedural one?

Dermot: Well, I wouldn’t like to go back to C++. That’s a nightmare! But I’ve been struck by how well C# works with COM in Visual Studio (much against my initial expectations, actually). The COM interfaces define a cast iron ‘surface’ against which you can program with certainty. Clearly, there are problems – memory management being number 1 and finding out what on earth an arbitrary COM pointer actually implements (I’ll give you 100 guesses) being number 2. But what I’m going to do in Ruby In Steel is rework my major class definitions into interfaces and use the class to implement the interface. That seems to be about right: it’s solid without being too clever.
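
A C# sketch of that reworking (the IScanner and RubyScanner names are invented for illustration):

    using System.Collections.Generic;

    // The interface is the cast-iron 'surface' that clients program against.
    public interface IScanner
    {
        IEnumerable<string> Tokens(string line);
    }

    // The class merely implements the surface. Its internals can be
    // reworked at will without the change propagating to any caller,
    // because callers only ever see IScanner.
    internal sealed class RubyScanner : IScanner
    {
        public IEnumerable<string> Tokens(string line)
        {
            return line.Split(' ');   // the real lexing details stay hidden
        }
    }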

Also, I wouldn’t want to lose managed memory, but I do like the simplicity of C. I find programming Microchip microcontrollers in C and assembler positively relaxing! But in all honesty the language that I’ve come across that satisfies a) strong interfaces, b) memory management and c) a simple C like syntax isn’t Ruby … it’s D.

There you have it…


Related Features

- OOPS! or: Where Did Object Orientation Go Wrong…?
- Ruby – Hidden Treasure or Flawed Gem?
- The D Programming Language: interview with Walter Bright
- D – C Done Right…?
- Programming Milestones: Smalltalk
- S# - Smalltalk: The Next Generation – interview with David Simmons



Forum

  • The Great OOP Debate
    17 April 2007

    An interesting and intelligent article, I thought, until I read the phrase:

    "and c) a simple C like syntax"

    on the last line. Isn’t this phrase an oxymoron?

    Ralph Boland

  • The Great OOP Debate
    20 March 2007, by Martin

    Hi. I came across Delphi many years ago in my favourite magazine (PC PLUS), and it was by following tutorials in a column written by an experienced Delphi programmer (mentioning no names) that I learnt my skills to program in OOP (Object Pascal). I had already spent three years programming in Pascal (procedural, on a 286) at college (quite a few years previously), and getting stuck into Delphi took a while, but it was no mean feat with the advice given by the columnist.

    I get by in Visual Basic (does anyone remember Visual Basic version 3 – yuk!!), but the main thing in programming (I humbly feel) is that it should follow the English language, or your main language, as much as it can. It wouldn’t matter if it was called "F minor FX .net plus" or whatever, so long as it naturally followed the English language and acted as an intermediary between computing language (assembler) and human speak. I don’t know about Ruby, but C++ and C# are definitely not it – not languages people like to dive head first into!!

    I can program in Delphi and Visual Basic, and I don’t think you can say which is better, object-orientated or procedural programming. Delphi does (or DID – I still use it) both. Why choose between chocolate cake and cheesecake when you can have a slice of both? The framework (COM / MS Foundation Classes / VCL / Java bytecodes?? / .NET) or the next big thing fades into oblivion when you realise that the most important ingredient is that it does its job: acting as an intermediary between man and machine (or woman and machine).

    I just wanted to say that.

    Huw Collingbourne and Dermot Hogan, I have heard these names before, now where did I hear them before.. ??

    • The Great OOP Debate
      20 March 2007, by Huw Collingbourne

      Huw Collingbourne and Dermot Hogan, I have heard these names before, now where did I hear them before.. ??

      A long time ago in a galaxy far, far away... ;-)

  • The Great OOP Debate
    31 January 2007, by Matthew Huntbach

    I looked at this, and the June 2005 article it referred to, where it was written "in the twenty or so years since object-oriented programming emerged from the universities", and I felt I needed to make the point – OOP *didn’t* emerge from the universities. Or if it did, it was very much a minority interest, and it was industry that pushed it. I’m a university lecturer who teaches programming and has an interest in programming languages. I was around 20 years ago, and back then most academics were convinced logic programming or functional programming was the future. OOP took us by surprise, and the fact that industry picked up on C++ and later Java, while all our fancy logic and functional programming languages got nowhere, put academic research into computer languages into a sulk from which it has never really emerged. In fact I find to this day many of my fellow academics are reluctant to accept OOP as more than a fad, and don’t really appreciate the revolution which I think OOP languages did bring to programming.

    I was also interested, as an academic, in your comments in the Ruby article of April 2006 about "the pretty low intellectual demands imposed by so called ‘computer science’ courses at colleges" and "Anyone who thinks that you can learn the principles of good design by studying Java is out to lunch". I can give you some detail on both of these.

    About the "low intellectual demands", academic Computer Science suffers because, apart from a brief boom around the Y2K period, there’s never been a really high demand for it, and schools often see it as a suitable subject to push their less able students towards. As a student once told me: "In my community, Computer Science is seen as a subject for thickies; if you’re any good you go into Medicine, or Law; if you can’t make it into them you go into Engineering or Business Studies; only after that do you consider Computer Science as an option". We academics have to fill our places otherwise we’re out of a job, and if we can only do so by taking on people who lack the skills and qualifications to go elsewhere, that’s what we’ll do. I think that accounts for any low intellectual standards you see, rather than any lack of drive on the part of academics.

    I don’t know any Computer Science academic who thinks the principles of good design can be learnt by studying Java. Actually we spend a lot of time trying to persuade the students that’s not the case, and getting them to look beyond narrow programming language issues. However, programming is a useful skill, still to some extent the core of Computer Science, and one that many students seem to find very difficult to pick up. Java was always a compromise language rather than one enthusiastically adopted by academics - some of the top schools still start off teaching in a functional language. I’ve grown to like Java because I can teach the basics of OOP in it, without the low-level mess of C++, and without the complaints from students "Why are you teaching us a language no-one uses in the real world?".

    On the practical problems of OOP, I never saw inheritance as such a big part of it as your article seems to imply by making it the main point on which to criticise the OOP paradigm. I came into practical OOP using Java myself from a background of research into computation using actors, where there are objects which are concurrent, but no inheritance. I don’t think I really understood inheritance until I was forced to in order to teach Java. There’s a growing move in OOP circles, anyway, to use inheritance sparingly and to prefer composition and delegation in cases where inheritance might have been considered in the early days of C++ and Java.
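
    A small C# sketch of that composition-and-delegation move (the stack example is invented for illustration):

        using System.Collections.Generic;

        // Inheritance: a stack IS-A list, so it also inherits Insert()
        // and friends - operations that can break the stack discipline.
        public class LeakyStack<T> : List<T>
        {
            public void Push(T item) { Add(item); }
            // Callers can still call Insert(0, item), Sort(), etc.
        }

        // Composition: a stack HAS-A list and delegates to it, exposing
        // only the operations it actually means to support.
        public class SafeStack<T>
        {
            private readonly List<T> items = new List<T>();
            public void Push(T item) { items.Add(item); }
            public T Pop()
            {
                T top = items[items.Count - 1];
                items.RemoveAt(items.Count - 1);
                return top;
            }
        }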

    I’m looking at Ruby right now. Actually, one of the things that puts me off it is the hype about it: I see all these websites which tell me that programming in it is so easy, almost like writing English, and that it solves all those problems with the complex syntax of languages like Java. And I think "Yeah, yeah, yeah, I’ve heard all this before" – in the hype when Java was first introduced and we were told it was such a natural programming language to use that anyone could pick it up. Actually, trying to teach programming to students who aren’t the brightest really does remind you of the "No Silver Bullet" maxim. I don’t think the basic problems I see with teaching programming are those addressed by Ruby; the real issue is getting to grips with abstraction. Sure, for toy examples Ruby code can look quite neat, but scale it up and the problems of moving beyond syntax to good design remain, and the sheer messiness of Ruby starts to look a problem.

    Still, there are people who I’ve admired in the past who are going for Ruby, so there must be something in it. The rejection of strong typing and all the syntax required to support it has been the big theme in post-Java development of programming languages. As an academic, what interests me is that all the development of practical programming languages is taking place outside the universities. Sure, some of the things we’ve talked about make their way into practical programming languages a long time later - OOP worked in that way. But I think academia has lost its way. A lot of what is supposedly programming research is complex mathematical stuff where you spend a week trying to understand the notation, and then another week trying to understand what they’re doing with it, then you think at the end "oh, is that it?". The consequence of the lack of academic involvement in programming language design is that messy languages like Ruby are going without challenge. I’d like to see more blue-skies thinking on programming languages in academia, a return to the days when research into new programming languages was a major theme in Computer Science. But it was because academic Computer Science got it wrong by NOT being the driving force behind OOP that programming language research lost prestige and funding.

  • The Great OOP Debate
    12 January 2007, by Lurch

    This article, and the others on this site questioning conventional wisdom on OOP, are fascinating.

    I’ve seen OOP contribute important advantages for software development projects, but I’ve also found that there are cases where OOP doesn’t apply. For example, I do a lot of scripting in support of Linux and Unix systems. Procedural code tends to work better here — OOP only complicates the job without offering any benefit.

    When I script in support of Windows servers, I use forms of VB (such as VBScript) which benefit from Microsoft’s ActiveX/COM/DCOM object model. But as one of your articles points out, this is a very limited concept of an "object model" (no inheritance or other features found in true OO systems). The "objects" are essentially interfaces into OS services rather than truly programmable objects.

    I’ve been disappointed that when I express a nuanced view of support for OOP to some people, I get a negative, often emotional, reaction. It seems as if those who don’t buy into an absolutist position in favor of OOP are seen as too outside the mainstream to be credible.

    Your articles perform a great service in exploring some of the finer points that otherwise get overlooked when discussing OOP.

  • The Great OOP Debate
    2 January 2007

    “Huw: Be honest with me: if you were given a free choice, would you prefer to be working in an OOP language or a procedural one?”

    “Dermot: Well, I wouldn’t like to go back to C++.”

    Um, since when was C++ not an OOP language?

    • The Great OOP Debate
      2 January 2007

      I happen to agree with Alan Kay’s take on the matter: "I invented the term Object-Oriented, and I can tell you I did not have C++ in mind".

    • The Great OOP Debate
      3 January 2007

      Errm,

      Last time I looked (about 2 minutes ago), C++ can be used as a procedural language (and often is) or as an OOP language.

      I agree with Dermot, I am not convinced that OOP is all it is cracked up to be.

      The question I ask any OOP advocate is: why does ISE Eiffel compile to C first to create the final application?

      ISE Eiffel is a pure OOP language, but it uses C – a very procedural language – to compile its code.

      At the end of the day, OOP is not a silver bullet – it is certainly not the answer to all software problems, and neither is any other programming paradigm. At least C++ can be used in an OOP way, a procedural way, or a mix of both.

      OOP is another way to do things and should not be seen as a religion, as some OOP purists believe.

      Even procedural languages such as C can create object-orientated programs – look at ISE Eiffel as an example, and check out this web link for those who do/will not believe that C can be programmed in an OOP way if desired: http://www.accu.org/acornsig/public....

      As programmers we need as many tools as we can find to make our otherwise difficult task of creating and maintaining software as easy and enjoyable as possible.

      I find it strange that in the beginning OOP was sold on its strengths of being able to use inheritance to create reusable software, but in the GOF and other design pattern books I have read, they insist that inheritance should be avoided wherever possible and to use aggregation or composition instead, so that is one OOP pro under debate.

      Most polymorphic behaviours in OOP languages can be implemented in procedural languages such as C very efficiently using function pointers.
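
      For the flavour of that technique – sketched here in C#, with a delegate standing in for the C function pointer, and with invented shape names:

          using System;

          // The C idiom: a record of plain data plus a 'function pointer'
          // that decides behaviour. A delegate plays the pointer's role.
          public delegate double AreaFn(double[] dims);

          public struct Shape
          {
              public double[] Dims;
              public AreaFn Area;   // a one-entry 'vtable', set by hand
          }

          public static class Shapes
          {
              static double CircleArea(double[] d) { return Math.PI * d[0] * d[0]; }
              static double RectArea(double[] d)   { return d[0] * d[1]; }

              public static void Main()
              {
                  var shapes = new[]
                  {
                      new Shape { Dims = new[] { 2.0 },      Area = CircleArea },
                      new Shape { Dims = new[] { 3.0, 4.0 }, Area = RectArea   },
                  };
                  foreach (var s in shapes)
                      Console.WriteLine(s.Area(s.Dims));   // dispatch without inheritance
              }
          }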

      What I do find useful in OOP languages is the property of information hiding.

      I applaud Dermot for daring to challenge exactly how useful OOP really is and what benefits it brings to software development other than for GUI frameworks.

      I use a mixture of OOP and procedural programming to get the job done, and like Dermot I also use D (where permitted) or, if I really have to, C/C++.

      • The Great OOP Debate
        3 January 2007

        The question I ask any OOP advocate is why ISE Eiffel compiles to C first to create the final application?

        ISE Eiffel is a pure OOP language, but uses C which is a very procedural language to compile its code.

        I don’t think that is a relevant question. Every program ends up as machine code of some sort. That doesn’t mean that writing machine code directly is a good way to program.

        • The Great OOP Debate
          3 January 2007

          Not a relevant question? I think it is a particularly pertinent question, since the topic here is the Great OOP Debate, no?

          Indeed, I agree that in some form or another code is compiled into a machine language of sorts, but C is not machine code the last time I looked – not even of sorts.

          My point is that if OOP is supposed to be the answer to the software crisis, why compile to C and then C to machine language, as ISE Eiffel does? Why not go straight to machine language? And why use C, when OOP proponents see fit to point the accusing finger at C as the prime cause of the software crisis in the first place?

          A bad workman always blames his tools.

          It appears that OOP is not living up to its great expectations, and is causing a software crisis all of its own called code bloat! It is also making a number of what can only be described as embarrassing U-turns on what were supposed to be its strengths.

          I ask another question: what programming language are *nix, Windows and Mac OS X mainly, if not entirely, written in?

          The simple answer? There isn’t a single programming language or paradigm that is best suited to solving all of today’s software problems – and that includes OOP languages, which are marketed as though they were.

  • The Great OOP Debate
    1 January 2007, by David Heffernan

    Interesting post. Lot’s of what you both said didn’t chime with my experiences, but I expect that’s because I develop a different type of app. And that’s really the nub. You need different horses for different courses.

    Anyway, I did really struggle with your mangling of "your" and "you’re". Please sort it out - it really grates with an arch pedant like me!

    Cheers, David.

    • The Great OOP Debate
      1 January 2007, by Huw Collingbourne

      We slip the odd error in to check that people are reading attentively ;-)

      Fixed now.

      Best wishes

      Huw

      • The Great OOP Debate
        1 January 2007

        Surely "you’re encapsulation is hopelessly broken" still needs some work.....

    • The Great OOP Debate
      1 January 2007, by David Heffernan

      So called arch pedant who can’t even spell "Lots" the plural correctly! Argh!!!

      • The Great OOP Debate
        4 January 2007, by P. Sword

        There are a vast number of words in our lexicon, which can be difficult to remember and easily confused; however, apostrophe rules are simple; hence, apostrophe errors are inexcusable! On the subject of spelling, why bastardise English in the way you do chaps? Oriented? Surely you mean orientated!

        • The Great OOP Debate
          4 January 2007, by Huw Collingbourne

          For many years I shared your prejudice in this usage. However, I now accept that ‘oriented’ is both the more widely used form and the etymologically preferable one (‘orientate’ is a 19th Century back-formation from ‘orientation’).

          As an intransitive, ‘to face in some specific direction, originally and especially to the east’ (an ecclesiastical term), orientate is correct; in all other senses to orient is preferable.

          Eric Partridge, ‘Usage and Abusage’

          Huw

