RohinGupta

OBJECTIVISM APPLIED TO SOFTWARE ENGINEERING


To the best of my knowledge, owing to the intellectual bankruptcy of the humanities in universities, Pragmatism is by far the most dominant trend in today's big businesses.

Be it the most notorious ones like WorldCom, with their fabrication of the books to ride over the current tide, or the huge bonuses allocated to top and middle management for incompletely achieved objectives.

The software industry is no exception, and there the philosophy has been formalized in the software engineering practice called Agile.

While specific computing subjects like data structures, operating systems, databases, and networking are very well defined, the overall engineering process is not. Starting with the Agile Manifesto in 2001, the books that followed, and the corresponding seminars, Agile practices are today dominant in the industry.

Besides four years of industry experience, my sources on Agile include a couple of seminars by firms such as Agile Developer and Good Agile, an extended argument with the trainer of the former, and readings of the initial chapters of his book.

Though "Pragmatic Programmer" is supposed to be the key book(maybe equivalent of Critique of Pure reason in Philosophy), but I think even without its read, I have sufficient knowledge to grasp the dangers.

Having gone through the major Objectivist works and the Ayn Rand Lexicon entries on Pragmatism, I think the consolidated information offers me a much deeper understanding of the subject than that of those who are otherwise better read and more experienced.

While a critique of Agile practices in software engineering would be a valid topic in itself, here I confine myself to the positive: working to create productive software engineering practices.

So in the coming days, I shall be publishing draft versions of my chapters on "Applied Software Engineering".

Comments are invited on the subject.

Looking forward to a meaningful dialog!


PREVIEW

Software today has become a critical component of the economy, specifically the part categorized as Information Technology or the Knowledge Economy. And since the collection and integration of information into knowledge, followed by tangible action, forms the essence of the technology sector, my sources include "Introduction to Objectivist Epistemology" and "The Objectivist Ethics".


I must say that I find your style of writing exceedingly hard to follow. It showers one with a stream of details without first framing those details within a simple concept.

Let me illustrate what I mean. Imagine you went back in time and met the most brilliant man of his time - Leonardo da Vinci. On seeing him in a wagon, you mention that in the future they will have cars. He asks you what a car is. You explain: Well, we have distributors sending a boosted voltage to the spark plugs, which ignites the gasoline. Later designs use electronic ignition, which helps fuel consumption. Once the engine is running, we let out the clutch... Leonardo, despite being a brilliant man, has no idea what you are on about. Fortunately you brought a friend with you, who, although he has limited technical knowledge, was able to help. He says to Leonardo: "Leonardo, a car is like a wagon. It has four wheels, and you sit in it. The difference is that there are no horses in front pulling it. Instead, they have invented a machine called a motor, which replaces the horses. This machine is controlled by the person in the wagon by regulating the fire burning in the machine with the amount of fuel fed to the motor."

Notice that your friend starts with not only the most basic underlying concept, but also uses ideas that are familiar to the listener. Once the basics are grasped, one can then expand. There is no point in talking of spark plugs if the idea of electricity isn't understood.

You asked for comments, so my comment is to consider your style of presentation. Don't make your listeners work to understand you; make it easy for them. When kids asked me how an aeroplane flew, I didn't toss Bernoulli's theorem at them. I said that it basically pushed against air by sending the air downwards, and that if they stood under a helicopter they would feel the air come down.

This is meant as constructive criticism, so please take it that way.


APPLIED SOFTWARE ENGINEERING

INTRODUCTION

Any engineering discipline ought to answer a HOW.

Applying the sciences, I can build a model of an engine or a car in my lab; how can I economically mass-produce these?

Similarly, while I can write fancy programs on my desktop, how can my knowledge be extended to make it viable in various fields?

To answer that question comprehensively, the software development cycle is split into different phases: requirements gathering, requirements analysis, design, code development, version control, module testing, software building, integration and system testing, acceptance testing, piloting, and system maintenance.

Before I interpret these phases using Objectivist epistemology, I think we need to be introduced to a few intermediate principles. These can then act as a bridge connecting very abstract epistemological principles to the concretes used in the software development phases.

NEED FOR A PARADIGM

The purpose of paradigms is to define fundamental rules that can simplify and improve the process of programming.

The essential fact that gives rise to the necessity of rules to guide thought and action is the application of the law of identity to consciousness.

Consciousness, like any physical entity, is FINITE: it has certain attributes and behaves in a certain way, the "faculty of awareness" being its distinguishing attribute.

Since consciousness is finite, it can hold only a few units in its "direct" awareness at any instant of time, a behavior Ayn Rand described as the crow epistemology.

While the exact number of units may not be known, based on an analysis of the objects we "perceive" in a single instant, the count cannot go into double digits. As a rule of thumb, I will take it as 4-7.

PROCEDURAL PROGRAMMING PARADIGM

A task of reasonable complexity, though, can easily involve scores of steps before it is successfully accomplished.

Therefore we split the task into sub-tasks, and can further split these into smaller tasks.

The count and magnitude of the smaller tasks are, of course, governed by the number of units an individual can deal with at any given time.

The smallest unit of a task can vary from person to person. If you are accomplished and the task is routine, the splitting may not go very deep; for a novice, the same task needs to be split further.

This principle of breaking a problem into smaller problems, solving them individually, and then integrating them into a whole is known as MODULARITY.

And modularity forms the essence of the Procedural Programming Paradigm.

In procedural programming, depending upon the complexity of the application and the nature of the modules, the application can be split into separate functions, across different files, libraries, and even processes.
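To make modularity concrete, here is a minimal procedural sketch in Java (the function names are hypothetical, purely illustrative): the large task of cooking is split into small functions, each holding only a handful of steps at a time.

// Procedural style: one large task split into small functions (modularity).
// All names are illustrative only.
class Recipe {

    public static void main(String[] args) {
        prepareIngredients();
        heatOil();
        addSpices();
        cookMainItems();
        serve();
    }

    static void prepareIngredients() { System.out.println("Chopping and measuring ingredients"); }
    static void heatOil()            { System.out.println("Heating oil in the pan"); }
    static void addSpices()          { System.out.println("Adding spices"); }
    static void cookMainItems()      { System.out.println("Adding main items and waiting"); }
    static void serve()              { System.out.println("Serving"); }
}

Each function is small enough to hold in direct awareness at once, which is exactly the crow-epistemology constraint the paradigm answers to.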

The procedural paradigm acted as a precursor to the Object Oriented Paradigm, which I will later show to have similarities to the principles of Objectivist epistemology.


OBJECT ORIENTED PARADIGM

A real world example of the procedural approach is a cooking recipe.

Here we split the task into various cooking actions: heating oil, adding spices, adding various items with the corresponding delays, plus a few preparatory steps.

However, the same technique cannot be directly used when preparing for a party of, say, 500 guests.

Before we begin preparations, we need to organize many other things: is the quantity of materials, vessels, and cylinders sufficient?

Only after considering these and other aspects do we come to the stage of applying the recipe.

At a very broad level this may still be categorized as modularity of tasks. However, to limit the scope and gain a better understanding, we come up with new principles in the Object Oriented Paradigm.

PRINCIPLES OF OBJECT ORIENTED PARADIGM

With the increase in computing popularity, memory, and speed, the complexity of applications started multiplying, and modularity (in its then-current form of splitting into functions) was no longer found to be a sufficient cognitive tool. Therefore Object Oriented principles were defined.

...And these principles I shall be elaborating in the coming days!



Thanks for sharing the perspective.

While the portion should be sufficient for those versed in philosophy or the software sector, in light of your reply it looks like it was somewhat abstract for a general audience.

So I will try to address the issue.

The later post, I think, provides more relatable scenarios and then builds on them.

Let me also take this opportunity to define the range of the prospective audience.

While the core group consists of people in the software industry, I also intend to address regular users of technology.

Given the industry direction as I see it, I think these users can contribute to the requirements gathering (maybe analysis) and piloting phases. (In case these terms seem alien, I shall be taking them up in coming posts.)

Also, I hope these writings will provide further inductive inputs to people interested in philosophy or other engineering disciplines!


PRINCIPLES OF OBJECT ORIENTED PARADIGM

Continuing the cooking example.

When you are cooking for one or two people, the central focus is on dividing the cooking tasks into manageable units (MODULARITY).

The ingredients, while important, are secondary.

On the other hand, the approach and setup reverse when you cook for 500 people.

The recipe dealing with the actual cooking steps comes into the picture later; the main focus is on ingredients, vessel capacity, cylinders, and such things.

Therefore we can say that a major change in the scale of the problem creates the need for a better approach. Similarly, the need for the Object Oriented Paradigm arose in software.

The ingredients, vessels, and cylinders here are OBJECTS. These have to be carefully procured and set up to successfully prepare food for 500 people.

This brings me to the definition of an Object.

OBJECT: An entity that has attributes, behavior, and identity.

The entity can be a perceptual thing like a table or a mobile phone, or it can be conceptual, like law or marriage, that is, a hierarchical integration of many things (starting from percepts).

In a mobile phone, for example, attributes can be the model, keypad, view screen, phone number, IMEI number (a unique number for every handset manufactured), battery level, and billing amount.

State is the set of values of the attributes (Model: Nokia 1100, Keypad: QWERTY, Phone no: 91XXX, IMEI: 1234YYY).

Behavior is the set of actions that can change the state of an object.

"Calling" can change the battery level and billing amount of the phone; "charging" can change the battery level; and so on.

Identity is the set of attributes that can uniquely identify the object. The phone number or IMEI number can uniquely identify a mobile phone.
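As a minimal sketch of this definition (the class and attribute names are hypothetical, chosen only to mirror the example above), the mobile phone object might be written in Java as:

// A mobile phone as an object: attributes, state (the attribute values),
// behavior (methods that change the state), and identity (IMEI / phone number).
// Illustrative sketch only.
class MobilePhone {
    // Static attributes
    private final String model;        // e.g. "Nokia 1100"
    private final String imei;         // identity: unique per handset
    private final String phoneNumber;  // identity: unique per subscription

    // Dynamic attributes
    private int batteryLevel = 100;    // percent
    private double billingAmount = 0.0;

    MobilePhone(String model, String imei, String phoneNumber) {
        this.model = model;
        this.imei = imei;
        this.phoneNumber = phoneNumber;
    }

    // Behavior: calling changes the battery level and the billing amount
    void call(int minutes) {
        batteryLevel -= minutes;          // simplified drain
        billingAmount += minutes * 0.5;   // simplified tariff
    }

    // Behavior: charging changes the battery level
    void charge() {
        batteryLevel = 100;
    }
}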

ABSTRACTION: An object can have a large number of attributes, behaviors, and also identities (depending on the context).

How do we decide what's important and what's not?

This brings us to the principle of Abstraction.

Abstraction means selecting the relevant details of an object and omitting the rest, relevancy depending on the nature of the problem to be solved.

Take traffic lights, for example. As a driver, I am only interested in the color of the signal and changes in the signal; the voltage, the current source, or the electricity company is irrelevant to me.
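A minimal sketch of that selectivity (hypothetical names): the driver's view of a traffic light exposes only the color and its changes; the electrical details are deliberately left out.

// Abstraction: only the details relevant to the driver are exposed.
// Illustrative sketch only.
interface TrafficSignal {
    enum Color { RED, AMBER, GREEN }

    Color currentColor();                  // what the driver cares about
    void onColorChange(Runnable observer); // be told when the signal changes
    // Voltage, current source and electricity company are deliberately omitted.
}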

ENCAPSULATION: Abstraction is important for understanding user requirements, and what the user considers necessary for action.

But engineering also involves implementation based on that understanding.

And abstraction, though crucial, is not sufficient for the complete functioning of an object.

So how should abstraction and implementation be related?

This is defined by the principle of encapsulation.

Let's say we are living in a civilization with only diesel engines.

The abstraction of a car would then involve features like the accelerator, brake, clutch, and gears, and the implementation would be the 4-stroke mechanism of the diesel engine.

Now suppose a scientist comes up with a petrol engine, which is more efficient and less polluting.

In the new car there is no need to change the accelerating or braking method (it would be difficult for users to change their driving habits), so we keep the abstractions while changing the implementation to the petrol engine.

The abstractions, like the accelerator and brake, that connect the user to the actual implementation are termed "interfaces".

So encapsulation can be defined as the separation of abstraction from implementation.

Encapsulation necessarily consists of a boundary-like packaging (the engine in this case) to which the user's interfaces connect.
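Here is a minimal sketch of the car example (hypothetical names, just to illustrate the principle): accelerate() and brake() are the interface, and the engine's mechanism is the packaged implementation behind it.

// Encapsulation: the driver sees only accelerate()/brake();
// the combustion details are packaged inside the engine. Illustrative sketch only.
interface Engine {
    void increasePower();
    void decreasePower();
}

class DieselEngine implements Engine {
    public void increasePower() { /* inject more diesel into the 4-stroke cycle */ }
    public void decreasePower() { /* inject less diesel */ }
}

class Car {
    private final Engine engine;   // implementation, hidden behind the interface

    Car(Engine engine) { this.engine = engine; }

    void accelerate() { engine.increasePower(); } // user-facing abstraction
    void brake()      { engine.decreasePower(); } // user-facing abstraction
}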

COMING UP LATER: POLYMORPHISM, INHERITANCE.


PRINCIPLES OF OBJECT ORIENTED PARADIGM(OOP) Contd...

POLYMORPHISM: If we look again at the example of encapsulation, we notice that the same abstraction (like the accelerator and brake) is implemented in many ways (diesel or petrol engine). This reflects the concept of polymorphism.

When the same abstraction is (or can be) implemented in more than one way, the principle is known as polymorphism.

Etymological meaning: many forms.

In implementing the abstraction of Choice in a browser, for example, it can be a drop-down box (as in a URL window), a radio button (as in the answer selection of multiple choice questions), or a checkbox (as in the square options while filling in a form).
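Continuing the car sketch from the encapsulation post (again with hypothetical names), polymorphism shows up when the same Engine abstraction gains a second implementation and the Car code does not change at all:

// Polymorphism: a second implementation of the same abstraction. Illustrative sketch only.
class PetrolEngine implements Engine {
    public void increasePower() { /* feed more petrol, spark ignition */ }
    public void decreasePower() { /* feed less petrol */ }
}

class Garage {
    public static void main(String[] args) {
        Car oldCar = new Car(new DieselEngine()); // same abstraction...
        Car newCar = new Car(new PetrolEngine()); // ...different implementation

        oldCar.accelerate();
        newCar.accelerate(); // identical call, different behavior underneath
    }
}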

INHERITANCE: Any new implementation necessarily takes some aspects from a previous one.

So how does the process of creating new software from a previous one proceed?

Based on the usage of the old application and the relevant knowledge of the subject, we define abstractions and encapsulations of the possible polymorphic forms, and then form the new object by combining old abstractions with new implementations (just as we did in implementing the petrol car from the abstractions of the diesel car and the implementation of the petrol engine).

Inheritance is the process of deriving a new implementation from existing abstractions.

Let's look at an evolution in technology.

In earlier mobile phones, contacts information could be transferred from one phone to another using the infrared frequency spectrum (still used in television remotes).

On top of this spectrum, an application was developed.

Possible abstractions are: get the list of devices in the vicinity, select a device, and transfer the data.

The implementation would involve a driver application that connects to the infrared port, enables or disables it, and sends bits and bytes over the channel using the infrared protocol (a set of rules for communication).

Later, when the Bluetooth technology spectrum came along, it was possible to keep the same abstractions with an implementation that involves the Bluetooth port and the corresponding frequency channel and protocol.

So here we are implementing contacts synchronization over Bluetooth from the abstractions of contacts synchronization over infrared.

The advantage is that we save the effort of defining abstractions for the Bluetooth application and can instead focus on the implementation.
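A minimal sketch of that reuse (hypothetical names; real infrared or Bluetooth APIs would of course differ): the abstractions defined for the infrared transfer live in an abstract parent, and the Bluetooth version inherits them while supplying its own implementation.

import java.util.Collections;
import java.util.List;

// Inheritance: a new implementation derived from existing abstractions. Illustrative sketch only.
abstract class ContactsTransfer {
    abstract List<String> listNearbyDevices();  // abstraction 1
    abstract void connect(String device);       // abstraction 2
    abstract void sendContacts(byte[] data);    // abstraction 3

    // Common workflow written once, against the abstractions only
    void transferTo(String device, byte[] contacts) {
        connect(device);
        sendContacts(contacts);
    }
}

class InfraredTransfer extends ContactsTransfer {
    List<String> listNearbyDevices() { /* scan over the IR port */ return Collections.emptyList(); }
    void connect(String device)      { /* open the IR channel */ }
    void sendContacts(byte[] data)   { /* send bytes using the infrared protocol */ }
}

class BluetoothTransfer extends ContactsTransfer {
    List<String> listNearbyDevices() { /* Bluetooth device inquiry */ return Collections.emptyList(); }
    void connect(String device)      { /* pair and open the channel */ }
    void sendContacts(byte[] data)   { /* send bytes using the Bluetooth protocol */ }
}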

COMING UP: PERSISTENCE PRINCIPLE OF OOP, AND ELABORATION OF SOFTWARE ENGINEERING LIFE CYCLES


Thank you for these posts, Rohin. I am studying these very things at the moment.

Going back to your appraisal of agile development, it has been presented to me as a method of repeatedly going through the (formerly exclusively) initial steps of requirement assessment, design and so on, so as to adapt to changing demands and discoveries made during development. Is this a fair representation of "agile", or is it simply a description of iterative development?

Looking forward to your upcoming posts.


Good to know that the posts are reaching receptive minds.

An iterative approach to software is necessitated by the nature of software and the nature of human consciousness.

It's easier to modify software than machines, electrical equipment, or buildings (though not as easy as Agile trainers would make it sound).

Plus, the human mind works best when it isolates the necessary aspects of the problem or requirement from the less relevant.

Agile, though it offers an iterative approach, was found to be dangerous because of its Pragmatic foundations.

As indicated in the first post, a comprehensive critique of Agile would require a more systematic study of the key books.

However, here are a few observations:

- It undermines long-term planning and, through it, contractual obligations (see the Agile Manifesto) - and, like a classic pragmatist, never actually dismisses them.

- It discourages abstract forethought and comprehensive design, even by architects.

- It undermines individuals who are prime movers by disparaging individual evaluation and focusing on product evaluation.

- It promotes automation as an out-of-context dogma, irrespective of the nature of the platform and project.

- In general, it promotes a culture where any kind of work that does not DIRECTLY CONTRIBUTE to the product (like evaluation, auditing, tracking) is frowned upon and avoided, though these are necessary for the overall interest of the individuals and also the product.

Objectively speaking, Agile is the least bad of all the software processes known to me.

But to be of any value, one should carefully and critically accept or reject its attributes.

As a whole it is a meaningless mesh (as is the case with any pragmatic work).

Intriguing as the subject is, it won't be possible for me to comment further in the near future.


In the mobile phone object, the battery level and billing amount are valid for a particular time interval, until they are changed by some operation of the phone.

The model, keypad, and view screen attributes, on the other hand, normally remain the same throughout.

Attributes of the object that remain the same throughout are called static attributes.

Attributes of the object that change over a period of time are called dynamic attributes.

PERSISTENCE: Till now we have looked at the attributes of an object that can be described at any particular instant of time, and the behavior that results in changes to these attributes.

But let's step back and ask another how.

How do these attributes become part of the object?

Or, more specifically, how does the object evolve?

How should we view the various stages in the development of the object?

Should we or should we not save the intermediate object states permanently?

If so, why? And what is the criterion of selection?

The principle of persistence enables permanent storage of the object, and also of the necessary states during its lifetime.

Let's say I take over the maintenance of a Google chat application like Gtalk.

It's a million lines of code, and there is a thousand-page requirements and design document.

If, however, I have access to the evolution of the application, for example:

- First the networking component was developed, connecting client and server.

- Then a SUBSET OF a protocol like XMPP was implemented, which enabled two different machines on the internet to chat using the command line. (The command line is the shell window in Linux and the DOS prompt in Windows.)

- Then the login and encryption components were developed.

- This was followed by the implementation of multiple users and the friends concept, with the information stored in a central server database.

- Then the various types of visibility, like offline, online available, online busy, invisible, etc., were developed.

- And finally the UI that presents these services in a usable format.

This information will greatly ease my learning effort (through documents, code, and hands-on work), as I now get an idea of which component depends on what without even looking at the code or design document.

(An understanding of the basic requirements is necessary before we look into the evolution of the application.)

(In the above example, the whole Gtalk application was taken as a single object, as is also the case at the beginning of a project.)
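A minimal sketch of what persisting those intermediate states could look like (hypothetical names and file layout; in practice a version control system or database plays this role): each stage of the application's evolution is written out so it can be inspected later.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Persistence: intermediate states of the (whole-application) object are stored
// permanently, so its evolution can be replayed later. Illustrative sketch only.
class ApplicationHistory {
    static void saveStage(String stageName, String description) throws IOException {
        Path file = Path.of("history", stageName + ".txt");
        Files.createDirectories(file.getParent());
        Files.writeString(file, description);
    }

    public static void main(String[] args) throws IOException {
        saveStage("01-networking", "Client-server networking component");
        saveStage("02-xmpp-subset", "Command-line chat over a subset of XMPP");
        saveStage("03-login-encryption", "Login and encryption components");
        // ...later stages: multi-user support, visibility states, UI
    }
}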


Ten days into this thread I see nothing explaining why this is claimed to be an example or application of Objectivist epistemology, in addition to the fact that the writing is hard to follow to the point that I suspect it is nearly incomprehensible to those not already familiar with the technology.


REQUIREMENTS GATHERING

What are the fundamental things involved in the cause that gives rise to a requirement?

Or, to ask it another way, when does requirements gathering actually begin?

In essence, the following things are involved:

1. Existing system: Every requirement presupposes an existing way of doing things.

These days it is applications like browsers or a word processor; 30 years back it was the typewriter, or a primitive electronic one, which led to the initial versions of computers having word processing programs.

We need to know the existing system the user uses before we start thinking about the new one.

This brings me to the concepts of Productivity and "Use Case".

2. Productivity:

Productivity is the value a system offers to the user. It's the most fundamental abstraction any requirement can be reduced to.

Not just in software, but in any engineering field, the unit for measuring productivity is the "time gained" in performing "an activity".

Take the productivity offered by a forum like this one.

I can broadcast a message to hundreds of people within a day.

If we were living in the pre-digital age, I would have to send printed copies, which can take at least a week to reach, and the money I would have to spend could be equivalent to a day's earnings - plus the time I would spend at the printing press and aligning the content to fit the paper.

Splitting the cost into three parts: sender's time, sender's cost, and transmission time.

For online posting on a forum, the sender's time is the time spent composing the original message. For a printed newsletter, there is extra time in preparing for print and the actual printing.

The cost here is internet usage, electricity, and a negligible computing cost - a small fraction of my day's earnings. For a printed letter, the cost involves printing plus envelopes and stamps.

On a forum, receivers are able to view the message within a day; using printed letters it would be more than a week.

So we see how the internet and the computer increase the productivity of the sender, cutting weeks down to hours.

One might ask how we can convert the productivity of entertainment devices like a PlayStation into time.

Well, an exact conversion may not be possible with current knowledge, but time improvements can be indicated.

Here is how.

ASSUMING A RATIONAL CONSUMER, let's say he is mentally and physically exhausted after a week's work.

Yet he needs to do his housework on the weekend, and due to lethargy he completes it in 5 hours. But suppose a half-hour video game reignites his goal-directed behavior, and he finishes the same job in 3 hours. So here too the productivity improvement is 1.5 hours.

Alternatively, he still finishes the work in 5 hours, but due to better motivation the quality of the work is improved. Then too the time improvements are visible, in the form of faster access to the things he organized better.

3. Use Case: The method of using the system for doing productive work.

More specifically, it can be defined as a single "abstraction" derived from the actual (or planned) use of the system in different ways.

The abstraction is the characteristic shared by the different ways in which we use the system.

For a mobile phone that we use for calling, texting, browsing, alarm setting, and the clock, the abstraction can be "information sharing and gathering", or, even more abstractly, "communication".

4. Fundamental value of Innovation: Innovation means taking existing system configurations and technology and re-arranging them to form a new system. Taken in itself, it cannot be categorized as valuable; it has to enhance the productivity of existing users to be of value. Therefore, productivity improvement becomes the end goal.

So the fundamental value of the new system that derives (inherits) from the existing system is the "new use case" that improves productivity in the form of better efficiency.

Let me try getting into the mind of Steve Jobs as he is thinking about the iPhone.

The existing system would be the normal BlackBerry or Motorola phones that we can use for calling, texting, or browsing.

And the fundamental value of the innovation would be the faster usage due to touch technology, which he may have seen in ATMs or finger-swipe access equipment.

Touch technology can thus be characterized as the "new use case".

Note: In future I will refer to the "fundamental value of Innovation" as the "new use case".

So Requirements Gathering can be defined as the process of explicitly stating the set of new use cases and their corresponding productivity improvements.
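As a minimal sketch (the field names are hypothetical, used only for illustration), the output of requirements gathering could be captured as a simple record pairing each new use case with its expected productivity improvement:

// Output of requirements gathering: new use cases plus their productivity improvement.
// Illustrative sketch only.
class Requirement {
    final String existingSystem;     // e.g. "Manual Data Synchronization"
    final String newUseCase;         // e.g. "Start sync automatically when phone data changes"
    final double hoursSavedPerWeek;  // productivity improvement expressed as time gained

    Requirement(String existingSystem, String newUseCase, double hoursSavedPerWeek) {
        this.existingSystem = existingSystem;
        this.newUseCase = newUseCase;
        this.hoursSavedPerWeek = hoursSavedPerWeek;
    }
}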


One elaboration to Post #10.

Original: Plus, the human mind works best when it isolates the necessary aspects of the problem or requirement from the less relevant.

Revised: Plus, the human mind works best when it isolates the necessary aspects of the problem or requirement, and postpones the less relevant into the future.


CORRELATING INTERMEDIATE PRINCIPLES TO EPISTEMOLOGICAL PRINCIPLES

Let me first make explicit two intermediate principles that were implicit in Requirements Gathering.

- The principle of Abstraction was used to define a "use case", by observing and then abstracting from the various uses of the system.

- The principle of Inheritance was used to form a "new use case", by deriving a more productive use from the existing system's use case.

Now let's further correlate these intermediate principles to the principles of epistemology.

But before I correlate, let me briefly introduce those wider epistemological principles.

Deduction is the process of arriving at a new relationship using an existing principle and an observed fact.

Taking the most popular example:

Principle: All men are mortal.

Observation: Socrates is a man.

Deductive relationship: Socrates is mortal.

One might ask how we arrived at the principle "All men are mortal".[1]

The fact that all men in history have died ALONE cannot be used to form this principle; there is still the possibility of a superman somewhere who may exist forever.

But when we analyze the nature of life, and how the process of aging at some point starts deteriorating the organs of ALL LIVING THINGS, that is when we can arrive at the principle "All men are mortal".

The process of arriving at a concept or a principle by observing the subsuming concretes is known as the "Principle of Induction".

Abstraction-Induction relationship:

When we use abstraction to get a use case, we observe similarities among particular uses of the system - in the case of the mobile phone, between calling, texting, browsing, etc. - to form the use case "mobile communication".

(The concepts of calling and texting were themselves formed by observing similarities between particular calls or text messages made at different times.)

Therefore the "principle of Abstraction" can be subsumed under the "principle of Induction".

(Induction also includes methods for qualitative and quantitative causal relationships, which won't be covered by abstraction ALONE but may also involve some form of experimentation, e.g. correlating the time period of a pendulum to its length.)

Deduction-Inheritance relationship:

When we form a "new use case", we combine the use case of the existing system (arrived at by an inductive process), the principle of Productivity, and a particular from an unrelated field (like Steve Jobs combining the touch technology of the iPod with the regular mobile phone use case).

Recall that we had labeled this method inheritance.

Therefore the intermediate "principle of Inheritance" can be subsumed under the "principle of Deduction".

[1] Dr. Peikoff's lectures on the Art of Thinking.


In the previous post, the definition of the "Principle of Induction" was a bit loose. Refining it:

Original: The process of arriving at a concept or a principle by observing the subsuming concretes is known as the "Principle of Induction".

Revised: The process of arriving at a concept or a principle by observing, analyzing, and synthesizing the subsuming concretes is known as the "Principle of Induction".


REQUIREMENT ANALYSIS

Now that we have the requirement from the "Requirements Gathering" phase, the questions that come to mind are:

How will it change the basic structure of the existing system? How will it impact the input and output of the system?

Is it possible to get an approximate cost for implementing the new use case? If so, how?

These are the questions that we will answer in "Requirement Analysis".

A few more concepts that will be needed for Requirement Analysis are:

1. Existing System Model:

This is a representation of the existing system as an object model.

Being a model, it abstracts those parts that are most important in the existing system.

Just as it is necessary to know the existing system in Requirements Gathering, it is necessary to model the existing system in Requirements Analysis.

So we represent the system as a model, identifying the components that are most important in the existing system - again using the process of abstraction.

Depending on the complexity of the model, we can use anything from simple block diagrams to a set of formal diagrams in a software modeling language like UML.

(Details of such languages are beyond the scope of this discussion.)

2. Final System Model: Integrating the existing system model and the new use case, we get the final system model.

Essentially, it is the variation of the existing system model in which the execution of the new use case is made possible.

3. Variant Model:

The delta (difference) between the Final System Model and the Existing System Model constitutes the Variant Model.

It contains the components that are added or modified in the Final System Model, and the components that are deleted from the Existing System Model.

4. Legacy components:

The components that remain the same in the existing and final systems are known as legacy components.

To concretize the terms Existing System Model, Final System Model, Variant Model, and legacy components, let me illustrate them using a real-life project: a smart phone application.

REQUIREMENT ANALYSIS OF A SMART PHONE APPLICATION

Brief Introduction:

The application synchronizes the contacts, calendar items (reminders, to-dos, etc.) and notes in your mobile phone by copying them to your remote server internet account (like an account on this forum). It can therefore serve as a backup when the phone is lost, or as an online reference while browsing, or it can be extended to connect your phone data to your Facebook friends, online calendar events, or status notes.

It uses an open protocol, "SyncML", over the internet services provided by the wireless network (similar to the way browsers use HTML over HTTP).

The application is called "Data Synchronization".

Existing System Description:

Just as in a web email application like Yahoo Mail or Gmail you need a web address, login name, and login password, in the "Data Synchronization" application you need to set the remote web server address, your login name, and your login password.

When you open the application on your mobile phone, you get a "Synchronization" option in the list.

On clicking the "Synchronization" option, copying of the data begins, and if the network and server are available, the phone data is copied to the server.

New Use Case:

In the existing "Data Synchronization" application, you need to start synchronization manually.

The "new use case" is to start synchronization as soon as "Contacts", "Calendar", or "Notes" are changed in the phone.

The productivity improvement is that the user does not need to start synchronization manually; it starts automatically when data is added, deleted, or modified in the phone.

Existing System Model:

Essentially, the manual "Data Synchronization" application consists of two parts: a Graphical User Interface (GUI) and an Engine component.

Just like the accelerator and brake in a car, the user interface component is responsible for the visible parts of the system, like the "Synchronization" option in the list and the "Settings" option (for setting the server address, user name, and password).

The Engine component is the one that:

1. Waits for "Synchronization" to be pressed in the UI.

2. Gets the data from the Contacts, Calendar, and Notes databases.

3. Opens an internet connection to the remote server using the web address, user name, and password.

4. Sends the data.

5. Closes the internet connection.

Steps 1-5 can collectively be called the implementation of the "Synchronization Service" provided by the Engine.

So the flow is: when we press the Synchronization option, the UI calls the Engine implementation through the "Synchronization Service", and the Engine then executes those five steps.

[Figure: Existing System Model - the UI invokes the Engine through the Synchronization Service]

Final System Model:

Recall that the final system model is for the "new use case", that is, automatic synchronization.

Besides the GUI and Engine, a Listener component is added.

The flow is:

1. As soon as the user changes "Contacts", "Calendar", or "Notes" information in the phone, the Listener component is notified.

2. On receiving the notification, it calls the Synchronization Service (the same service that is called by the UI in the existing system).

[Figure: Final System Model - both the UI and the Listener invoke the Synchronization Service]

Variant Model:

In the final system model, the UI and Engine remain the same.

The delta consists of one added component: the Listener.

Legacy Components:

The GUI and Engine here comprise the legacy components.
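Here is a minimal sketch of the two models side by side (hypothetical names): the Engine exposes the Synchronization Service, the GUI is the legacy caller, and the Listener is the single variant component added in the final model.

// Existing system: GUI -> Engine (Synchronization Service).
// Final system: the Listener also calls the same service; GUI and Engine stay legacy.
// Illustrative sketch only.
interface SynchronizationService {
    void synchronize(); // read data, open connection, send, close
}

class Engine implements SynchronizationService {
    public void synchronize() {
        // 1. read Contacts/Calendar/Notes, 2. connect to the server,
        // 3. send the data, 4. close the connection
    }
}

class Gui { // legacy component
    private final SynchronizationService service;
    Gui(SynchronizationService service) { this.service = service; }
    void onSynchronizePressed() { service.synchronize(); }
}

class ChangeListener { // variant component, added in the final model
    private final SynchronizationService service;
    ChangeListener(SynchronizationService service) { this.service = service; }
    void onPhoneDataChanged() { service.synchronize(); }
}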

Summarizing "Requirement Analysis" discussion so far:

- "Existing System Model" is the object model, abstracting important components of Existing system.

- "Final System Model" is the object model of Existing system variant, which can execute "new use case".

- "Variant Model" contains components that are added/deleted or modified in the Final Model.

- "Legacy Components" remain unchanged in Existing and Final model.

"Requirement Analysis" involves explicitly stating the changes "new use case" caused in existing system model, and

what will be the approximate cost of implementing the use case.

While we now know how to calculate important "structural changes" by using system models.

In the coming posts I will be discussing Implementation Cost Calculation.


SKILL PROFILING

In every phase of software development, whether it is the time taken in that phase or the quality of the work, the fundamental determining factor is the skill of the people who do the work (though not the only factor). So before we look into implementation cost estimation during Requirement Analysis, let's first look at how to profile the skills of the individual(s).

Skills can be divided into three parts:

1. Basic Skills: For developing the system, one should have some idea about the particular category of programming languages and how to work with them. For testing, knowledge of the basic approach one needs to take in testing is necessary.

If, for example, the system is in Java (a high-level language) and the individual has knowledge of C++ programming (another high-level language), then we can say that he has the basic skills, as the category of the two is the same.

But if he knows only about hardware and assembly languages, and no high-level language, then the individual does not have the basic skills.

For profiling testing skills: if an individual knows the basic testing steps that can be applied to any system, then he has basic testing skills. But if he has only an intuitive view of testing, and not the principles, the basic skills are lacking.

Similar skill profiling can be done for other phases like design, version control, etc.

So in basic skill profiling, one needs to be proficient in the capabilities the general category requires.

2. Major Skills: The scope narrows down in the major skill category. Here, knowledge of the platform on which the system is built (Java, C, Linux, Windows, Objective-C, Mac) and the domain knowledge (networking domain, telecoms domain, banking domain, embedded systems, etc.) is necessary.

Major skill requires knowledge of the specific platform and domain.

While major and basic skill profiling is taken care of in hiring and training, before the individual is assigned, in minor skills the scope narrows down further. The minor skill category therefore needs to be upgraded while the project is ongoing, or just before.

3. Minor Skills: Because there are relatively few categories, platforms, and domains, an individual can be formally trained in major and basic skills.

There is still some gap, though. It includes knowledge of the very specific technologies used in the existing system or the variant system (the components corresponding to the variant model in the final system).

For the Auto Data Sync component, let's say it is developed on Google Android.

Basic skills include high-level language training and testing training.

Major skills include Java on the Android platform for development, and the embedded systems domain for testing.

The minor skills will include knowledge of the listening mechanism in Contacts, Calendar, and Notes, and the method to access the Synchronization Service of the Engine. The tester here should be able to generate the scenarios most often used, OR most likely to halt the system.

(The skill list in this example is not exhaustive. For design it can include design patterns, domain knowledge, etc.)

So minor skills include knowledge of the technologies specific to the existing system or the variant system.

When estimating time in the coming posts, I assume that people are trained in basic and major skills, and only the gap in minor skills will be considered.


Correction

Original: 1. Basic Skills: For developing the system, one should have some idea about the particular category of programming languages,

Revised: 1. Basic Skills: For developing the system, one should have some idea about the general category of programming languages,


REQUIREMENT ANALYSIS CONTD...

REQUIREMENT COST ANALYSIS

Estimating the cost is by far the most important, and most difficult, task in any engineering discipline, for it involves projecting something that lies in the future.

The guideline for estimation we have is experience - our own, or somebody else's.

And of all the disciplines, software engineering is the youngest, so the chances of going wrong are greater in this sector. No wonder it is a nightmare.

Just because estimation involves experience does not mean that we can estimate cost based on intuition or whim, even if the person happens to be an expert.

The greater the scale of the project, the greater the chances that a cost projection done emotionally will go wrong.

Unfortunately, Agile not only encourages such methods, but is also skeptical about the possibility of having accurate, scientific methods for estimating costs.

We shall be looking into such methods in this post.

Before I go into the details of cost calculation, let's look at the items involved in the cost analysis.

(1) Time: The time it will take from Requirements Gathering to finally deploying the implementation. Henceforth, I will call this effort estimation.

(2) Hardware and Software Cost: The hardware and software that will be required in the various phases.

(3) Unit Cost: (1) and (2) comprise the fixed cost, that is, the cost which is independent of the number of application copies or products manufactured.

Unit cost consists of the additional cost that is proportional to the number of units manufactured or the number of systems in which the application is installed.

Now let's see how these costs can be calculated.

Recall how we arrived at this stage.

- We first got the "new use case" by integrating a particular from another system (like a touch screen or automatic notifications) into the existing system (a mobile phone or manual data synchronization).

- This information was then used to develop the "Variant Model" in the first part of the analysis phase.

The "Variant Model" is what will be most critical in cost estimation.

Now moving ahead to the details of cost estimation.

1. Effort Estimation: In most cases we record the time spent in the various software development phases.

So I assume the same was done for the other system from which we got the idea for the requirement (the touch iPod or other automatic mobile phone applications).

Based on that information, we can approximate the "total development time" as the sum of the "development time for each component" added or modified in the "Variant Model".

The total development time of each component forms most, BUT NOT ALL, of the effort estimation.

A simple observation of a novice and an expert carpenter will tell us the crucial role skill plays in predicting development time, and so "skill profiling" comes into the picture.

As mentioned in the previous post, basic and major skills are assumed. Here is how the minor skill "upgrade time" can be predicted.

For each component in the variant model, we check the corresponding minor skills that are needed, and again from the historical data of these we import the minor skill upgrade time.

Generally, though, the time needed for each minor skill upgrade should not differ much, so an individual can reuse the time he needed for upgrading SOME OTHER minor skill for the same platform and domain, IF HISTORICAL DATA IS UNAVAILABLE.

Continuing the example of the "Automatic Data Synchronization" application: the variant model contains a Listener component, which is notified every time the Contacts, Calendar, or Notes information changes.

What we also observe is that when we add Contacts, Calendar, or Notes entries in the phone, the respective user interface immediately gets updated.

So we can safely assume the existence of a listening mechanism in each of the three systems' GUIs.

- If historical data for each of the three listening systems exists, then we simply import the actual development time for each system into our estimation.

- If historical data exists for any one system, but we know that all three use a similar method for notification, then we can multiply that effort estimate by three.

- If for some reason historical development data isn't available, we can get a rougher time estimate by comparing the size of the variant in another system to the size of the existing system; based on the ratio of sizes, we get the estimate for developing the variant.

For the Listener component, let's say it is 20% of the existing system. Then we can approximate the estimate as 20% of the time it took to develop the existing system.

- Further, we know that besides listening for changes, the Listener component uses the Synchronization Service. The usage of the service is similar to its usage by the GUI, so we can add the time from the historical record of the GUI development.

- Along similar lines, we can add the "skill upgrade" data from the variant component's historical records, or reuse the individual's own data from a previous minor skill upgrade.

Summarizing the estimation steps in a more general form (a small worked sketch follows this list):

- The total estimation is the sum of the estimates for the variant model components and the minor skill upgrades.

- For each component object in the variant model, we enlist the set of actions (behavior) that the object can take. (Listen to changes in Contacts, Calendar, and Notes; then call the Synchronization Service.)

- For each action, we look for a historical record in another system (for the listeners) or the same system (the Synchronization Service).

- If a historical record is unavailable, we use the comparative size ratio of the variant component to the existing system to get the estimate.

- The skill upgrade time is taken from each variant component's "skill upgrades" history; if unavailable, it is equated to the individual's own historical record for some other minor skill.
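Here is the promised sketch of those steps as a calculation (all figures and names are hypothetical; a real estimate would come from actual historical records):

// Effort estimate = sum over variant components of (historical or size-ratio estimate)
//                 + minor skill upgrade time. Illustrative sketch only.
class EffortEstimator {

    // Per-action estimate: use the historical record if available,
    // otherwise scale the existing system's effort by the size ratio.
    static double estimateAction(Double historicalDays, double sizeRatio, double existingSystemDays) {
        return historicalDays != null ? historicalDays : sizeRatio * existingSystemDays;
    }

    public static void main(String[] args) {
        double existingSystemDays = 100.0;                                        // hypothetical
        double listenContacts  = estimateAction(4.0, 0, 0);                       // historical record available
        double listenCalendar  = estimateAction(null, 0.04, existingSystemDays);  // ~4% of system size
        double listenNotes     = estimateAction(null, 0.04, existingSystemDays);
        double callSyncService = estimateAction(2.0, 0, 0);                       // from GUI development history
        double skillUpgrade    = 3.0;                                             // from a previous minor skill upgrade

        double total = listenContacts + listenCalendar + listenNotes + callSyncService + skillUpgrade;
        System.out.println("Estimated effort (days): " + total);                  // 4 + 4 + 4 + 2 + 3 = 17
    }
}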

2. Hardware and Software Cost: Apart from the time we will spend building the final system, cost is also incurred by the hardware and software used for development. Unless the variant model contains new hardware (like a touch screen) or new software (like a new database system, libraries, or drivers), the hardware and software is mostly what was already spent on the existing system.

To get the existing system's hardware/software cost, we look at the existing system's historical records; to get the cost of new hardware/software, we look at market prices or the historical records of the corresponding systems.

In developing the Listener component of "Data Synchronization", the hardware we will need is one mobile phone to test the application, and since the software code is written and compiled on a desktop, we will also need a data cable for transferring the application code.

The software resources will be the software development kit and a remote server account for testing. Both can be known from the historical record of the existing system.

3. Unit Costs: Once the final system is deployed, it will necessarily consume some more resources for every unit it is deployed into.

While there can be many resources consumed by the requirement, again, applying the principle of Abstraction, we need to focus on those that are most relevant and omit those whose cost is negligible.

Such resources can be categorized as:

Physical resources: like RAM, CPU time, hard disk space, touch screen - essentially the additional hardware space and time. (If the productivity improvement involves cutting down on hardware usage, then of course this will reduce.)

Services: the time for supplying, installing, and downloading the new system; additional service charges, screen space or menu space; and the effort required in supporting and maintaining the new use case - essentially, the extra time and money spent per unit by the producer and the consumer due to the new requirement.

As in any trade transaction, the "unit cost" should be significantly less than the "value of innovation" we got in "Requirements Gathering". If not, now is the time to drop the requirement.

In our example Data Sync application:

The unit cost in hardware will be the extra RAM, battery life, and storage that the Listener component will consume (the three key resources of any mobile phone domain). Again, this can be inferred from the analysis of corresponding implementations of the variant model components.

The unit cost of services will be the relatively greater internet usage of the phone, and the time needed to download and install the new application.

To sum up, "Requirement Cost Analysis" includes the estimation of development time, minor skill upgrade time, the hardware and software needed for development, and the unit cost that will be incurred by the new use case.

P.S.: There was repeated mention of historical records in this post. In case you are unfamiliar with the term in the context of software engineering, details will be given later.

Coming up: Design and Coding phases.


CORRELATING REQUIREMENT ANALYSIS TO THE INTERMEDIATE PRINCIPLES OF THE OBJECT ORIENTED PARADIGM (OOP)

In the Requirements Gathering phase, we abstracted the functions of the existing system into a use case (texting, calling, and browsing as mobile communication).

Was there any abstraction involved in Requirement Analysis? Let's see.

ABSTRACTION IN REQUIREMENT ANALYSIS

While in Requirements Gathering the functionality of the system was abstracted as the use case, in Requirement Analysis we abstracted the structure of the system. That is, we selected the relevant aspects of the existing system and represented them as the Existing System Model. Similarly, we abstracted the final system into the Final System Model.

Therefore, the "process of modelling" can be subsumed under the principle of Abstraction.

ENCAPSULATION IN REQUIREMENT ANALYSIS

Recalling encapsulation:

The abstractions, like the accelerator and brake, that connect the user to the actual implementation are termed "interfaces". So encapsulation can be defined as the separation of abstraction from implementation. Encapsulation necessarily consists of a boundary-like packaging (the engine in this case) to which the user's interfaces connect.

To put it more succinctly: separation of abstraction from implementation, with connectivity between the two achieved through interfaces.

In the Existing System Model of the Data Sync application, the GUI represented the component that holds the abstractions of manual synchronization. These were represented as lists containing the Synchronization and Settings buttons.

The Engine portion was the actual implementation of the process that copied the phone data to the remote server.

Therefore, representing the system as a GUI and an Engine can be categorized as encapsulation.

So while the process of modelling, an action, is abstraction, the model itself is an encapsulation.

Just as in the car the accelerating and braking that connected the driver to the engine were interfaces, so the Synchronization Service, connecting the GUI (and also the Listener component) to the Engine, is an interface.

So the model, representing the abstraction and implementation aspects of the system separately, can be subsumed under the principle of Encapsulation.

POLYMORPHISM IN REQUIREMENT ANALYSIS

Recall that while doing effort estimation in Requirement Analysis, we were observing the existing implementations of the listening mechanism in the Contacts, Calendar, and Notes UIs. Based on the information we got - the development time or the size - we were able to estimate the time needed to implement the Listener component in "Automatic Data Synchronization".

Similarly, we got the skill upgrade time for the Listener component through the existing systems' implementation history.

The reason we were able to do that was that the existing implementations and the projected implementation were different forms of the same abstraction, the abstraction being represented as the Listener component in the Variant Model.

Now recall the definition of polymorphism:

"When the same abstraction is (or can be) implemented in more than one way, the principle is known as polymorphism."

Therefore, both aspects of effort estimation in Requirement Analysis are applications of the principle of Polymorphism. That is, whenever we need to estimate the time or skills for a particular component in the variant model, we need to find information from the existing implementations of "similar components", and the abstraction represented in the variant component is the similarity we look for.

Therefore, effort estimation is an application of the principle of Polymorphism.


Corrections

Original: Similarly, we abstracted the final system into the Final System Model.

Correction: The final system was actually inherited (deduced) from the Existing System Model and the new use case.

Original: So while the process of modelling, an action, is abstraction, the model itself is an encapsulation.

Correction: The process of modelling can be inheritance also, as in the case of deriving the Final System Model.


Further correction


Therefore, "process of modelling" can be subsumed under the principle of Abstraction OR Inheritence.


SYSTEM ATTRIBUTE CONCEPT EXPANDED

So far, when we say that something is an attribute of a product or a system, the imagery that comes to mind is that the fundamental cause of the attribute must necessarily be physically part of the entity.

"The apple is red." The fundamental cause of the redness lies in the apple. (Part of the cause of the attribute is also the nature of our sense of sight.)

Similarly, the attributes we abstracted in our examples of an object - like model, phone number, color, etc. - could be obtained by observing or using the phone.

Now let's go back and see how the phone attained its current state.

It was manufactured in a particular factory, transported through a supply chain and bought at a retail store. If we examine that process, we can get some more information about the phone.

It was transported by Western Railways, it took 5 days to reach the retail store, it was sold at a discount of 10%, and so on. Can this "detached" information be described as an attribute of the phone object?

My answer is YES. Since an attribute is something that "describes" an aspect of the object, the properties of the "process that led to the object" are also attributes of the object.

HISTORICAL RECORD AS THE SYSTEM ATTRIBUTE

In Requirement Analysis, the "Historical Record" of the existing system or related systems was used for Effort Estimation, hardware/software costs and unit costs.

So at this point one can ask: how is the Historical Record related to the system?

Any system can be represented as an object, and, as we saw in the previous section, the properties of the process that leads to an object are also attributes of that object. Therefore, the information in the Historical Record - time of development, skill upgrade time, hardware and software used in development - is also an attribute of the system. And therefore the epistemological principles of Induction and Deduction, and the intermediate principles of the Object Oriented paradigm, are also applicable to the Historical Record.

As an illustration, here is how these principles were applied to the Historical Record in Requirement Analysis.

By the principle of Polymorphism, we had already established the existing implementation corresponding to the Variant Model (the listening mechanism in the Contacts and Calendar UI). The implementation time and the skills needed for development are also attributes of that existing implementation, and these attributes were among the main causes. Therefore, we could predict that the same attributes, in "approximately" the same quantity, would exist in another polymorphic form - and so we could arrive at an Effort Estimation for the Variant Model implementation of the Final System.

Similarly, the hardware and software used for development are also attributes of the system. And applying the polymorphism of the Variant Model with the existing system, we could predict that the same would be required for the new system's development as well.

So, to reiterate in conclusion: the Historical Records of a system's development are also system attributes, and therefore the epistemological principles applicable to physically connected system attributes are just as applicable to them.
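A minimal Java sketch of treating the historical record as an attribute and using it for estimation - the figures, and the idea of simply carrying the existing form's numbers over to the projected form, are illustrative assumptions of mine, not project data:

// The historical record is modelled as an attribute of an implementation.
class HistoricalRecord {
    int developmentDays;
    int skillUpgradeDays;

    HistoricalRecord(int developmentDays, int skillUpgradeDays) {
        this.developmentDays = developmentDays;
        this.skillUpgradeDays = skillUpgradeDays;
    }
}

public class EffortEstimationSketch {
    // Record attached to the existing listener implementation (illustrative numbers).
    static HistoricalRecord contactsUiListenerHistory = new HistoricalRecord(20, 5);

    public static void main(String[] args) {
        // Because the projected Listener component is another form of the same
        // abstraction, we predict approximately the same attributes for it.
        int estimatedDays = contactsUiListenerHistory.developmentDays
                + contactsUiListenerHistory.skillUpgradeDays;
        System.out.println("Estimated effort for the AutoSync listener: ~"
                + estimatedDays + " person-days");
    }
}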


FURTHER CORRELATION OF EPISTEMOLOGICAL AND INTERMEDIATE PRINCIPLES

So far we have induced that Abstraction is subsumed under Induction, and Inheritance under Deduction.

Let's further classify the principles applied in the previous posts.

CORRELATING POLYMORPHISM

When the same abstraction is (or can be) implemented in more than one way, the principle is known as polymorphism.

Etymological meaning: Many forms

A polymorphic arrangement consists of an abstraction as the root, and its various forms, realized through various implementations, as the leaves.

[Attached figure: a polymorphic arrangement - the abstraction at the root, its implementations as the leaves.]

The abstraction can be a variant component like the Listener model, or the Car Engine.

The concretes of the abstraction are its polymorphic forms: AutoSync/ContactsUI/CalendarUI for the Listening component, or a petrol car / diesel car for the abstraction Car Engine.

From the concretes of the collection, we can obtain the abstraction.

We mentally isolate the commonality between the concretes - the Listener component in the case of AutoSync and the Contacts/Calendar UI. This commonality is the abstraction, while the concretes are the polymorphic forms from which the abstraction is obtained.

Therefore, a polymorphic arrangement can be obtained by the principle of Induction.

Unlike Abstraction and Inheritance, which are subsumed by the principles of Induction and Deduction, Polymorphism is an end result - the effect - while Induction (or the subsumed Abstraction) is the cause.

CORRELATING ENCAPSULATION

Encapsulation: separation of abstraction from implementation.

Recall how we applied the principle of Encapsulation.

We had an existing system and we modelled it for the purpose of Requirement Analysis; or we obtained the Final System Model using the Existing System Model and the new use case. The models were prepared such that the abstraction (Synchronization in the Manual UI, or the Listener Component) is separated from the implementation (the SyncEngine) by an interface (the Synchronization Service).

In the case of modelling the Existing System, the process was purely inductive: the Engine and the UI were abstracted. In the case of modelling the Final System, it was deductive: we took the Existing System Model and the new use case to arrive at the Final System Model.

Encapsulation thus is the end result - the effect - while Induction (Abstraction) or Deduction (Inheritance) is the cause.

DIFFERENTIATING ENCAPSULATION AND POLYMORPHISM

If both Encapsulation and Polymorphism are end results of epistemological processes, how are they different from each other? Let me elaborate.

In polymorphism, we take an abstraction as the root and various concretes CONTAINING the abstraction as the leaves. Now take any one of those concrete forms and separate out the abstraction (of the polymorphism) from the implementation of that abstraction - as in the case of Manual Synchronization, where the abstraction was held by the GUI and the implementation by the Sync Engine. The end result of analyzing ANY ONE concrete is encapsulation.

So in the case of polymorphism, we are concerned only with the systems in which the particular abstraction is present. In encapsulation, we analyze further and obtain the relationship between abstraction and implementation within any one system or system model.
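A minimal Java sketch of the contrast, with illustrative names of my own: the Synchronization abstraction has several concrete forms (polymorphism), and inside any one form the implementation is hidden behind the abstraction (encapsulation).

// The root abstraction, with two polymorphic forms below.
interface Synchronization {
    void run();
}

class ManualSynchronization implements Synchronization {   // one concrete form
    // Encapsulation inside this form: the user-facing call is separated
    // from the engine implementation hidden below.
    private final SyncEngine engine = new SyncEngine();

    public void run() {
        engine.copyDataToServer();
    }

    private static class SyncEngine {                       // hidden implementation
        void copyDataToServer() {
            System.out.println("Manual sync: copying data to the server");
        }
    }
}

class AutomaticSynchronization implements Synchronization { // another concrete form
    public void run() {
        System.out.println("Automatic sync triggered by the listener");
    }
}

public class PolymorphismVsEncapsulationSketch {
    public static void main(String[] args) {
        Synchronization[] forms = { new ManualSynchronization(), new AutomaticSynchronization() };
        for (Synchronization form : forms) {
            form.run();   // polymorphism: same abstraction, different forms
        }
    }
}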
