Cluster vs. Stratum


Cluster sampling and stratified sampling are two different sampling methods. The main difference between them is that in cluster sampling, the cluster itself is treated as the sampling unit, so in the first stage the analysis is done on a population of clusters. In stratified sampling, the individual elements within each stratum are analysed.


Cluster Sampling
In this method, naturally occurring groups are selected for inclusion in the sample.
Its main use is in market research. The total population is divided into groups (clusters), after which a sample of the groups is selected.
After this, the relevant data is collected from all the elements of the selected groups.
At times, instead of collecting information from every element of each group, information can be collected from a sub-sample of the elements.
This technique works best when the variation is between the members within the groups rather than between the groups themselves.
Before you start using this method, make sure that the clusters are collectively exhaustive and mutually exclusive.
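As a rough illustration, here is a minimal Python sketch of one-stage cluster sampling; the blocks and household labels are invented for the example.

```python
# A minimal sketch of one-stage cluster sampling (names are illustrative).
import random

random.seed(0)

# Hypothetical population: 10 naturally occurring clusters (e.g., city blocks),
# each holding a list of household IDs.
clusters = {block: [f"hh_{block}_{i}" for i in range(20)] for block in range(10)}

# Stage 1: randomly select a sample of whole clusters.
chosen_blocks = random.sample(list(clusters), k=3)

# Collect data from every element of each selected cluster.
sample = [hh for block in chosen_blocks for hh in clusters[block]]
print(len(sample))  # 3 clusters x 20 households = 60 sampled units
```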
Stratified Sampling
In this technique, the population is divided into strata, and a random sample is drawn from each stratum.
Creating different strata allows a different sampling percentage to be used in each stratum.
These strata are simply groups, each consisting of a number of elements.
Within each stratum, simple random selection is performed.
Make sure that every element is assigned to exactly one stratum. This method is known to produce a weighted mean whose variability is less than that of the arithmetic mean of a simple random sample of the population.
As in cluster sampling, the strata should be collectively exhaustive and mutually exclusive.
This allows random or systematic sampling to be applied within each stratum, which also helps to reduce errors.
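A minimal Python sketch of proportional stratified sampling, with invented strata, might look like this:

```python
# A minimal sketch of proportional stratified sampling (data is illustrative).
import random

random.seed(0)

# Hypothetical population grouped into relatively homogeneous strata.
strata = {
    "urban": [f"u{i}" for i in range(600)],
    "suburban": [f"s{i}" for i in range(300)],
    "rural": [f"r{i}" for i in range(100)],
}

sample_size = 50
total = sum(len(members) for members in strata.values())

# Simple random selection within each stratum, proportional to its size;
# a different sampling percentage per stratum is equally possible.
sample = []
for name, members in strata.items():
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))

print(len(sample))  # roughly 50: 30 urban, 15 suburban, 5 rural
```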
Cluster vs. Stratified

Cluster Sampling

Application: It is used when natural groupings are evident in a statistical population.

Choice: It can be chosen when each cluster is internally heterogeneous, so that every cluster resembles the population as a whole.

Advantage: The method is cheaper as compared to the other methods.

Disadvantage: The main disadvantage is that it introduces higher errors.
Stratified Sampling

Application: In this method, the members are grouped into relatively homogeneous strata, which allows greater balancing of the statistical power of tests.

Choice: It is a good option when the population is heterogeneous overall but can be divided into internally homogeneous strata.

Advantages: This method can ignore irrelevant subpopulations and focus on the crucial ones, and different sampling techniques can be applied to different strata. This improves the efficiency and accuracy of the estimation.

Disadvantage: It requires a choice of relevant stratification variables, which can be difficult. It is not very useful when there are no homogeneous subgroups, and its implementation is expensive. If accurate information about the population is not available, an error may be introduced.

Quantiles

Quantiles are values taken at regular intervals from the inverse function of the cumulative distribution function (CDF) of a random variable. Dividing ordered data into q essentially equal-sized data subsets is the motivation for q-quantiles; the quantiles are the data values marking the boundaries between consecutive subsets. The quantiles can be used as cutoff values for grouped data in approximately equal size groups. Quantiles can also be applied to continuous data, providing a way to generalise rank statistics to continuous variables.

A kth q-quantile for a random variable is a value x such that the probability that the random variable will be less than x is at most k/q and the probability that the random variable will be greater than x is at most (q−k)/q = 1−(k/q). There are q−1 of the q-quantiles, one for each integer k satisfying 0 < k < q. In some cases the value of a quantile may not be uniquely determined, as can be the case for the median of a uniform probability distribution on a set of even size.
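For example, the quartiles (4-quantiles) of a small data set can be computed with NumPy; the data below is made up:

```python
# A short sketch computing q-quantile boundaries with NumPy.
import numpy as np

data = np.array([3, 6, 7, 8, 8, 10, 13, 15, 16, 20])

# The three 4-quantiles (quartiles) cut the ordered data into four
# roughly equal-sized groups: k/q for k = 1, 2, 3.
quartiles = np.quantile(data, [0.25, 0.5, 0.75])
print(quartiles)  # boundary values between consecutive quarters of the data
```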

Absolute value function

The absolute value of a real number x, written |x|, is defined piecewise:

|x| = x    if x ≥ 0
|x| = −x   if x < 0



|2| = 2, |−2| = −(−2) = 2

The absolute value function is used to measure the distance between two numbers. The distance between x and 0 is |x − 0| = |x|, and the distance between x and y is |x − y|. For example, the distance from −2 to −4 is |−2 − (−4)| = |−2 + 4| = |2| = 2, and the distance from −2 to 5 is |−2 − 5| = |−7| = 7.

Bernoulli Process

A Bernoulli process is a sequence of Bernoulli trials in which:
  • the trials are independent of each other,
  • there are only two possible outcomes for each trial, arbitrarily labeled "success" or "failure", and
  • the probability of success is the same for each trial.
One of the simplest and most used examples of a Bernoulli process is a sequence of coin tosses where, for example, a "head" would constitute a success.

As a random process, we regard a "success" as the occurrence of an event; there is no value judgement involved in the term. For example, suppose a manufacturing machine is observed over a period of time, and we are interested in how many days the machine breaks down. If the probability of breaking down is the same each day, then we can use a Bernoulli process to model this, where the machine breaking down at least once on a particular day constitutes a "success", or an event. It is unlikely that the factory owner would think of this as a successful outcome!
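A short simulation of such a process in Python, with an assumed breakdown probability, could look like this:

```python
# A minimal simulation of a Bernoulli process: independent trials with a
# fixed success probability p (here, a machine breaking down on a given day).
import random

random.seed(1)
p = 0.1          # assumed daily breakdown probability
days = 365

# 1 marks an "event" (breakdown) on that day, 0 marks no event.
trials = [1 if random.random() < p else 0 for _ in range(days)]
print(sum(trials), "breakdown days out of", days)
```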

Determinant of a Square Matrix

A determinant is a real number associated with every square matrix. I have yet to find a good English definition for what a determinant is. Everything I can find either defines it in terms of a mathematical formula or suggests some of the uses of it. There's even a definition of determinant that defines it in terms of itself.
The determinant of a square matrix A is denoted by det A or | A |. Now, that last one looks like the absolute value of A, but you will have to apply context: if the vertical lines are around a matrix, it means determinant.
The line below shows the two ways to write a determinant.
| 3  1 |       [ 3  1 ]
| 5  2 | = det [ 5  2 ]

Determinant of a 2×2 Matrix

The determinant of a 2×2 matrix is found much like a pivot operation. It is the product of the elements on the main diagonal minus the product of the elements off the main diagonal.
| a  b |
| c  d | = ad − bc
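As a quick check of this rule, here is a small Python sketch (using the 3, 1, 5, 2 example above) compared against NumPy's determinant routine:

```python
# A one-line sketch of the 2x2 determinant rule, checked against NumPy.
import numpy as np

def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]]: main diagonal minus off diagonal."""
    return a * d - b * c

print(det2(3, 1, 5, 2))                           # 3*2 - 1*5 = 1
print(np.linalg.det(np.array([[3, 1], [5, 2]])))  # same value, up to rounding
```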

Properties of Determinants


  • The determinant is a real number; it is not a matrix.
  • The determinant can be a negative number.
  • It is not associated with absolute value at all except that they both use vertical lines.
  • The determinant only exists for square matrices (2×2, 3×3, ... n×n). The determinant of a 1×1 matrix is that single value in the determinant.
  • The inverse of a matrix will exist only if the determinant is not zero.

Box-Jenkins Models for Time Series


The Box-Jenkins model is a mathematical model designed to forecast data within a time series. It transforms the time series to make it stationary by using the differences between data points, which allows the model to pick out trends, typically using autoregression, moving averages, and seasonal differencing in its calculations.

Autoregressive Integrated Moving Average (ARIMA) models are a form of Box-Jenkins model.

Estimating the parameters of a Box-Jenkins model is complicated and is most often done with software. The model was created by two statisticians, George Box and Gwilym Jenkins, and outlined in their 1970 book, Time Series Analysis: Forecasting and Control.
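As a rough illustration, here is a sketch of fitting an ARIMA model with the statsmodels library (assumed to be installed); the series is synthetic and the (1, 1, 1) order is chosen arbitrarily for the example:

```python
# A minimal sketch of fitting a Box-Jenkins (ARIMA) model with statsmodels.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.5, 1.0, size=200))  # a trending random walk

# order=(p, d, q): p autoregressive terms, d differences to reach
# stationarity, q moving-average terms.
model = ARIMA(series, order=(1, 1, 1)).fit()
print(model.forecast(steps=5))  # forecast the next five points
```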

Multi-stage sampling & Multi-phase sampling

Multistage sampling refers to sampling plans where the sampling is carried out in stages using smaller and smaller sampling units at each stage.[1]
Multistage sampling can be a complex form of cluster sampling. Cluster sampling is a type of sampling that involves dividing the population into groups (or clusters); one or more clusters are then chosen at random, and everyone within each chosen cluster is sampled.
Using all the sample elements in all the selected clusters may be prohibitively expensive or unnecessary. Under these circumstances, multistage cluster sampling becomes useful. Instead of using all the elements contained in the selected clusters, the researcher randomly selects elements from each cluster. Constructing the clusters is the first stage; deciding which elements within each cluster to use is the second stage. The technique is used frequently when a complete list of all members of the population does not exist or would be inappropriate to compile.
In some cases, several levels of cluster selection may be applied before the final sample elements are reached. For example, household surveys conducted by the Australian Bureau of Statistics begin by dividing metropolitan regions into 'collection districts' and selecting some of these collection districts (first stage). The selected collection districts are then divided into blocks, and blocks are chosen from within each selected collection district (second stage). Next, dwellings are listed within each selected block, and some of these dwellings are selected (third stage). This makes it unnecessary to create a list of every dwelling in the region; a list is needed only for the selected blocks. In remote areas, an additional stage of clustering is used in order to reduce travel requirements.[2]
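A minimal Python sketch of such two-stage selection, with invented districts and dwellings, might look like this:

```python
# A minimal sketch of two-stage cluster sampling: select clusters first,
# then sub-sample elements within each chosen cluster (data is made up).
import random

random.seed(0)

districts = {d: [f"dwelling_{d}_{i}" for i in range(50)] for d in range(20)}

# Stage 1: choose a few collection districts.
stage1 = random.sample(list(districts), k=4)

# Stage 2: list dwellings only within the chosen districts and sub-sample.
sample = []
for d in stage1:
    sample.extend(random.sample(districts[d], k=10))

print(len(sample))  # 4 districts x 10 dwellings = 40 units
```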
Although cluster sampling and stratified sampling bear some superficial similarities, they are substantially different. In stratified sampling, a random sample is drawn from all the strata, where in cluster sampling only the selected clusters are studied, either in single- or multi-stage.

Multi-phase sampling

A sampling procedure in which some information is collected from the whole sample and additional information is collected, at the same time or later, from sub-samples of the whole sample (i.e., some units provide more information than others).
A multi-phase sample collects basic information from a large sample of units and then, for a sub sample of these units, collects more detailed information. The most common form of multi-phase sampling is two-phase sampling (or double sampling), but three or more phases are also possible.

Multi-phase sampling is useful when the frame lacks auxiliary information that could be used to stratify the population or to screen out part of the population.

Pascal Distribution

The shorthand X ∼ Pascal(n, p) is used to indicate that the random variable X has the Pascal distribution with positive integer parameter n and real parameter p satisfying 0 < p < 1. A Pascal random variable X has probability mass function
f(x) = C(n − 1 + x, x) · p^n · (1 − p)^x,   x = 0, 1, 2, …

where C(n − 1 + x, x) denotes the binomial coefficient "n − 1 + x choose x".

The Pascal distribution is also known as the negative binomial distribution. It can be used to model the number of failures before the nth success in repeated, mutually independent Bernoulli trials, each with probability of success p. Applications include acceptance sampling in quality control and modelling demand for a product.
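For illustration, the pmf can be evaluated with SciPy, whose nbinom distribution uses this same parameterisation (failures before the nth success); the parameters below are arbitrary:

```python
# A sketch of the Pascal (negative binomial) pmf via SciPy.
from scipy.stats import nbinom

n, p = 3, 0.4   # example parameters: waiting for the 3rd success, p = 0.4

# P(X = x) = C(n - 1 + x, x) * p^n * (1 - p)^x
for x in range(5):
    print(x, nbinom.pmf(x, n, p))
```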

Concept of Maxima and Minima

In mathematical analysis, the maxima and minima (the plural of maximum and minimum) of a function, known collectively as extrema, are the largest and smallest value of the function, either within a given range (the local or relative extrema) or on the entire domain of a function (the global or absolute extrema).[1][2][3] Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions.

As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum.
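As a small worked illustration, the local extrema of a differentiable function can be located with SymPy by solving f′(x) = 0 and applying the second-derivative test; the function below is an arbitrary example:

```python
# A short sketch of locating local extrema symbolically with SymPy.
import sympy as sp

x = sp.symbols("x")
f = x**3 - 3*x                         # example function

critical = sp.solve(sp.diff(f, x), x)  # where f'(x) = 0: here [-1, 1]
for c in critical:
    second = sp.diff(f, x, 2).subs(x, c)
    kind = "local maximum" if second < 0 else "local minimum"
    print(c, kind, "value:", f.subs(x, c))
```

For f(x) = x³ − 3x this reports a local maximum of 2 at x = −1 and a local minimum of −2 at x = 1.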

What are matrices? How are determinants different from matrices? Discuss few applications of matrices in business.


In mathematics, a matrix (plural matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns, that is treated in certain prescribed ways.[1][2][3] One such way is to state the order of the matrix. For example, a matrix with two rows and three columns has order 2×3. The individual items in a matrix are called its elements or entries.


Provided that they are the same size (have the same number of rows and the same number of columns), two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second.
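A minimal NumPy sketch of these two rules, with made-up matrices, might look like this:

```python
# A minimal NumPy sketch of the addition and multiplication rules above.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # order 2x3: two rows, three columns
B = np.array([[6, 5, 4],
              [3, 2, 1]])        # also 2x3, so A + B works element by element
C = np.array([[1, 0],
              [0, 1],
              [1, 1]])           # 3x2: columns of A match rows of C

print(A + B)       # element-wise sum, still 2x3
print(A @ C)       # valid product: (2x3) @ (3x2) gives a 2x2 result
# A @ B would raise an error: 3 columns cannot meet 2 rows.
```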


Determinants:

A determinant is associated with every square matrix and is not defined for any other type of matrix. It is a real number that can informally be considered the result of 'solving' a square matrix, and it is denoted det(A) or |A|. The latter may look like the absolute value of A, but in this context it refers to the determinant of matrix A. The determinant of a 2×2 matrix is the product of the elements on the main diagonal minus the product of the elements off the main diagonal.

Let's take the example of the matrix B with rows (4, 6) and (3, 6):

B = | 4  6 |
    | 3  6 |

The determinant of matrix B, |B|, is 4 × 6 − 6 × 3 = 24 − 18 = 6.
  • The determinant is a real number; it is not a matrix.
  • The determinant can be a negative number.
  • It is not associated with absolute value at all except that they both use vertical lines.
  • The determinant only exists for square matrices (2×2, 3×3, ... n×n). The determinant of a 1×1 matrix is that single value in the determinant.
  • The inverse of a matrix will exist only if the determinant is not zero.

Applications of matrices in business.

  1. Matrices are used to represent real-world data, such as the traits of a population (habits, preferences, and so on); they are a convenient way to tabulate and analyse survey results.
  2. Matrices are used in economics to calculate gross domestic product, which in turn helps in planning the production of goods efficiently.
  3. In robotics and automation, matrices are the basic elements for describing robot movements. The movements are programmed using calculations over the rows and columns of matrices, and the control inputs for the robots are derived from those calculations.


Explain various methods of software process models.


Software Process Models: A system too large for one person to build is usually also too large to build without an overall plan that coordinates the people working on it, the tasks that need to be done, and the artifacts that are produced. Researchers and practitioners have identified a number of software development process models for this coordination. Here are some of the main ones.
These process models are alternatives, but not exclusive ones: most describe different aspects of a process, and it is common for a development group to be following two or more simultaneously. For example,

  • The sashimi process is a way of organising a waterfall with feedback.
  • Boehm's spiral model example uses prototyping as the model for each cycle, and portions of a waterfall model for the delivered-system stage of the prototyping model.
  • An incremental process often uses a sashimi process for its "Produce a build" stage.

Code-and-fix
This simple process is often said to be what unsophisticated developers follow spontaneously. It provides no guidance for dividing up the task of producing software, and it doesn't distinguish the various development artifacts (they may not even be present, except for the code).
In this process, developers write code, fix the problems they notice, and repeat. There is no guidance to help developers converge to an appropriate result.

Sequential processes
Sequential processes divide up software development by the distinguished activities of software development, each one associated with a distinct kind of artifact (Table below), and then do one after another in some pattern. The activities and artifacts have a linear dependency relationship: each activity's artifacts depend on the artifacts produced by the activities above it in the table, and if a higher artifact changes, all lower artifacts may have to change to match it. This chain of dependence affects all software development; in processes that don't produce all these artifacts, such as XP in which requirements and specifications are not produced, the kind of knowledge that would go into the absent artifact still participates in the chain of dependence.



Activity         Artifacts
Requirements     Requirements and specification
Architecture     The system architecture, division into modules, and module interfaces
Implementation   The source code
Testing          The module, subsystem, system, and acceptance tests
Deployment       The distribution package
Maintenance      Bug reports and modified artifacts



The sequential processes make the development activities the top-level elements of the process, and deal with the chain of dependence by doing up-chain activities before down-chain activities in some pattern.

Waterfall
The waterfall model was in use as early as the late 1950s. It was first described explicitly (by Royce in 1970) as a way software should not be produced.



Prototyping
In this approach, an initial prototype is produced (by whatever development process is desired) and used by stakeholders in order to validate the requirements and identify problems and promising solutions. The prototype must exhibit at least enough of the eventual system's intended characteristics for the stakeholders to evaluate it, but typically a prototype will run slower and afford only incomplete functionality. The final system is then developed from scratch (again by whatever process is desired), benefiting from the lessons learned.
The use of prototypes has been common in engineering and in less formal approaches for building just about anything. For building physical objects, a prototype is often a model at reduced scale.
Cyclical processes
In contrast to sequential processes, in which a list of distinguished activities are done one after another, cyclical processes do the same thing over and over. The goal is that each cycle brings the development closer to its successful completion. The various cyclical processes choose different things to do over and over, and may have specific relations between successive cycles.
Spiral
The spiral software process is a cyclical model whose steps are not the activities of development (requirements, architecture, etc.) but rather four phases for addressing whatever problem has the greatest risk of causing the development to fail. Each cycle addresses the highest-risk problem that the developers currently face. For each cycle, developers follow these phases:
1. Determine the objectives of this cycle, the alternative solutions that may be considered, and the constraints that must be met.
2. Evaluate the alternatives. For each one, identify the risks involved and (if possible) figure out how to resolve them.
3. Develop the solution to this cycle's problem, and verify that it is acceptable.
4. Plan the phases of the next cycle (including, of course, deciding which problem now constitutes the highest risk).

Notice that at the end of each cycle, the developers have a product. Initially this product may be an overall concept; as the spiral process goes through successive cycles, the product becomes an implementation. Figure 7 shows an example application of the spiral process to a system's development. This figure is Boehm's original figure from his 1988 paper.
In the first (innermost) cycle of the example, beginning in the upper left quadrant,
1. the developers determine the objectives of the system;
2. then they do a risk analysis and produce a prototype as the second phase (evaluate alternatives, identify and resolve risks);
3. developers then do simulations (using the prototype), model problematic aspects, and run benchmarks before choosing a concept of operation (probably we would call this an overview of the system); and finally
4. developers make a plan for determining the system's requirements and generally for the system's entire development and operational life.

In the second cycle of the example, developers determine the cycle's objectives, etc., and perform a more substantial risk analysis, then produce a second prototype. As in every cycle, they then do simulations, etc. In this cycle, they next develop and validate requirements for the system. In the planning phase they produce a plan for the development.
In the third cycle, the highest current risk is determined to be the software's architecture and design. The developers perform a risk analysis and produce yet another prototype (the third one). After further simulations, etc., the developers produce an architecture and design and validate and verify them. Finally, they plan the integration and testing.
In the fourth cycle, developers focus on detailed design and implementation. As always, they do a risk analysis and produce a prototype; this prototype is an operational one that can be evaluated in terms of the system's eventual operation. After simulations, etc., developers produce a detailed design; implement the modules and unit-test them; integrate the modules (probably in several steps at several levels) and test the results; and put the system through its acceptance test. At this point development may have reached a successful conclusion; if not, another cycle will be needed.
Unified
The Rational Unified Process, or RUP, is perhaps the only process discussed here whose use was and is promoted and supported by a specific company whose business is based on it (Rational Software, now owned by IBM). RUP can be characterised as a spiral process, with each iteration driven by risk mitigation, within which the activities follow a waterfall or sashimi pattern.
Iterations are grouped into four successive phases, one or more iterations per phase, in which first the early activities, then progressively later activities, are dominant:



  • Inception 
  • Elaboration 
  • Construction 
  • Transition

and the activities are categorised into nine disciplines, comprising six engineering disciplines:

  • Business Modelling 
  • Requirements 
  • Analysis and Design 
  • Implementation 
  • Test 
  • Deployment

and three supporting disciplines whose activities cross-cut the engineering disciplines:
  • Environment
  • Configuration and Change Management
  • Project Management
The distinctive features of RUP are in the details of its prescriptions for requirements, analysis, and design (specifically in how development knowledge from one artifact type directs the next kind), beneath the level of abstraction of software processes, and are not discussed here. The name Rational Unified Process arose from the history of the company promoting it. The company was originally named Rational Machines and produced Ada development tools in the 1980s. In the 1990s Rational hired or bought the company of each of the three prominent object-oriented methodology proponents (Booch, Rumbaugh, and Jacobson), whose competing design notations and processes were analogous but distinct, after which they developed a single notation and a single process unifying the concepts behind all three, namely UML and RUP.
Incremental
An incremental process is one in which the functionality of the desired system is divided into small increments that are implemented and delivered one after another in quick succession. Each increment is chosen so that it expands on the previous one and is small enough to produce quickly. The most important functionality is implemented first, in the earlier increments (Royce1990-tapm). The initial increment produces a running system that does next to nothing, but [does] it correctly (Brooks, p. 267). Builds are frequent, typically daily.
An incremental process has several advantages:

  • Stakeholders start seeing results early, with the first increment, and are calmed.
  • Developers start seeing results early, too, and are encouraged.
  • If the system isn't turning out as the stakeholders expected, everyone finds out sooner.
  • Short increment cycle times mean development is less likely to get seriously off track (there isn't time to).
  • Daily build cycles mean each developer is forced to focus and fix bugs as they arise.
  • Because everyone has to focus on what is important in order to prioritise system features and allocate them to increments, stakeholders and developers tend not to waste time on inessentials.
  • Each increment (if well chosen) is of manageable size for the developers.
  • Everyone is happier because the system is visibly working (incomplete, but working) from the beginning.
Virtually all modern development processes are incremental.

Test-driven
The test-driven software process is the one followed for agile development, extreme programming, and similar approaches. It is an incremental approach in which each increment is defined by a new test. Each iteration of the cycle produces a running system that passes all its tests (Williams+Maximilien+Vouk2003-tddd).

  • The cycle begins when a test is added for a new desired behaviour.
  • The developers run all the tests on the current system; the new test is required to fail. It should fail, because it tests a behaviour that hasn't been implemented yet. The failure of the new test shows that it isn't accidentally testing something the software already does, and that no mistake in the new test makes it always succeed.
  • If the new test doesn't fail, the developers have to back up, figure out why, and fix the test so it fails.
  • The developers repeatedly write some code, and run the tests again; this may go on for some time. Typically some of the old tests will fail as the new code changes how the system behaves, until the system is fixed so that it passes all the old tests again. The old tests are acting as regression tests.
  • Eventually the new code successfully implements the new behaviour without breaking any of the old behaviour, and the system passes all its tests.
  • Of course, at this point the system works but its architecture and module design may be bad. If necessary, the developers refactor the system (change the form of the code without changing its meaning, or what it does) until the architecture and design are good enough.
  • Now the system works properly (for the behaviours requested so far) and its architecture and design are fine. The stakeholders can try it out to see if they like what there is so far. If the system is good enough for the stakeholders, the development is complete. If not, the stakeholders request a new behaviour and the cycle begins again.
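As a rough illustration of one red-green cycle, here is a minimal sketch using Python's unittest; the function under test is invented for the example:

```python
# A minimal sketch of one test-driven (red-green) cycle with unittest.
import unittest

def is_even(n):
    # Written *after* the test below was first seen to fail ("red"), with
    # just enough code to make it pass ("green"); refactor later if needed.
    return n % 2 == 0

class TestNewBehaviour(unittest.TestCase):
    def test_is_even(self):
        # This test is added first and must fail before is_even exists.
        self.assertTrue(is_even(4))
        self.assertFalse(is_even(7))

if __name__ == "__main__":
    unittest.main()  # all tests passing ends the cycle
```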

Agile

Agile describes a group of related development processes that are usually presented in opposition to traditional development processes such as the waterfall and spiral models. Agile development is characterised by an emphasis on small teams of skilled individual developers, changing requirements, frequent version deliveries, daily contact with stakeholders, possibly with one or more captive stakeholders that sit with the team constantly, and face-to-face interactions. See http://agilemanifesto.org/ for the principles behind agile development.
A related approach from the 1980s, Rapid Application Development or RAD, was similar to agile development in many ways but (as far as I can determine) was not test-driven.
Extreme Programming
Perhaps the most prominent agile methodology is the test-driven Extreme Programming, or XP (Beck2000-epee). In XP, a group of up to about a dozen highly skilled developers and a stakeholder sit in the same room. The developers work in pairs. Requirements are expressed as tests, and the tests are the requirements. There is no documentation and there are no artefacts other than the tests and the code. Everything else is handled by face-to-face discussion (thus the need for everyone to be in one room).

Scrum

The Scrum process organises development into a sequence of sprints, each of which results in a potentially usable product with an added increment of function. The tasks for each sprint are set, in consultation with a stakeholder representative, during a sprint planning meeting and cannot be added to during the sprint. Each task is typically expressed as a user story. Each sprint is time boxed: the end date of the sprint does not change. Tasks that can't be accomplished in time are returned by the team to the backlog for future consideration.
During a sprint, the team has a brief Daily Scrum meeting, facilitated by the designated Scrum master, in which team members say what they did yesterday, what they are going to do today, and what obstacles are in their way. No brainstorming or discussion is allowed; anything other than answering the three questions is deferred to meetings among the specific people involved. Scrum uses the sashimi process for the work process, and has some agile characteristics but is not test-driven and does not involve daily contact with stakeholders.
Takeuchi and Nonaka (1986) use the metaphor of rugby, and talk of "moving the scrum downfield" [p. 138] to describe all members of a team moving en masse towards their goal, but don't specifically describe a scrum process. The Scrum software process appears to have been first described by DeGrace and Stahl (1990), who say it originated in camera and automobile development, and introduce it with "If Scrum were applied to software development, it would go something like this." Beedle et al. (1999) and Rising and Janoff (2000) are important early reports on Scrum for software development.

Define SCM standards.


The purpose of Software Configuration Management is to establish and maintain the integrity of the products of the software project throughout the project's software life cycle. Software Configuration Management involves identifying configuration items for the software project, controlling these configuration items and changes to them, and recording and reporting status and change activity for these configuration items [SEI 2000a].
Configuration management (CM) refers to a discipline for evaluating, coordinating, approving or disapproving, and implementing changes in artifacts that are used to construct and maintain software systems. An artifact may be a piece of hardware or software or documentation. CM enables the management of artifacts from the initial concept through design, implementation, testing, baselining, building, release, and maintenance.
At its heart, CM is intended to eliminate the confusion and error brought about by the existence of different versions of artifacts. Artifact change is a fact of life: plan for it or plan to be overwhelmed by it. Changes are made to correct errors, provide enhancements, or simply reflect the evolutionary refinement of product definition. CM is about keeping the inevitable change under control. Without a well-enforced CM process, different team members (possibly at different sites) can use different versions of artifacts unintentionally; individuals can create versions without the proper authority; and the wrong version of an artifact can be used inadvertently. Successful CM requires a well-defined and institutionalised set of policies and standards that clearly define

  • the set of artefacts (configuration items) under the jurisdiction of CM
  • how artefacts are named
  • how artefacts enter and leave the controlled set
  • how an artefact under CM is allowed to change
  • how different versions of an artefact under CM are made available and under what conditions each one can be used
  • how CM tools are used to enable and enforce CM
These policies and standards are documented in a CM plan that informs everyone in the organisation just how CM is carried out.

What is software cost estimation ?


Software cost estimation can be defined as the approximate judgement of the costs for a project. Cost estimation will never be an exact science, because there are too many variables involved in the calculation, such as human, technical, environmental, and political factors. Moreover, any process that involves a significant human factor can never be exact, because humans are far too complex to be entirely predictable. Furthermore, software development for any fair-sized project will inevitably include tasks whose complexity is difficult to judge, because of the complexity of software systems. Cost estimation is usually measured in terms of effort. The most common metric is person-months or person-years (also called man-months or man-years): the amount of time for one person to work for a certain period. It is important that the specific characteristics of the development environment are taken into account when comparing the effort of two or more projects, because no two development environments are the same.
Cost estimation is an important tool that affects the planning and budgeting of a project. Because a project has a finite amount of resources, not all of the features in a requirements document can always be included in the final product. A cost estimate done at the beginning of a project helps determine which features can be included within the resource constraints of the project (e.g., time). Requirements can be prioritised to ensure that the most important features are included in the product. Including the most important features at the beginning reduces the risk of the project, because the complexity of a project increases with its size, which means there is more opportunity for mistakes as development progresses. Thus, cost estimation can have a big impact on the life cycle and schedule of a project.
Cost Estimation Process
In order to understand the end result, or the outputs, of the software cost estimation process, we must first understand what the software cost estimation process is. By definition, it is the set of techniques and procedures used to derive the software cost estimate. There is usually a set of inputs to the process, and the process uses these inputs to generate or calculate a set of outputs.
Classical View
Most software cost estimation models view the estimation process as a function computed from a set of cost drivers, and in most techniques the primary, or most important, cost driver is believed to be the software requirements. In the classical view of the software estimation process, the software requirements are the primary input to the process and form the basis for the cost estimation. The estimate is then adjusted according to a number of other cost drivers to arrive at the final figure. So what is a cost driver? A cost driver is anything that may or will affect the cost of the software: things such as design methodology, skill levels, risk assessment, personnel experience, programming language, or system complexity.
In the classical view, the estimation process generates three outputs: effort, duration, and loading. The following is a brief description of each:

  • Manpower loading - number of personnel (which also includes management personnel) that are allocated to the project as a function of time. 
  • Project duration - time that is needed to complete the project. 
  • Effort - amount of effort required to complete the project, usually measured in units such as man-months (MM) or person-months (PM). 


In the classical view, the outputs (loading, duration, and effort) are usually computed as fixed numbers, with or without a tolerance. In reality, the cost estimation process is more complex than this: many of the inputs to the process are modified or refined during the software cost estimation process itself.




Actual View
In the actual cost estimation process, there are other inputs and constraints to consider besides the cost drivers. One of the primary constraints on the software cost estimate is the financial constraint: the amount of money that can be budgeted or allocated to the project. There are other constraints, such as manpower and date constraints, and other inputs, such as the architecture, which defines the components that make up the system and the interrelationships between those components. Some companies will have a certain software process or an existing architecture already in place; for these companies, the software cost estimate must be based on those criteria.
There are very few cases where the software requirements stay fixed, so how do we deal with requirement changes, ambiguities, or inconsistencies? During the estimation process, an experienced estimator will detect ambiguities and inconsistencies in the requirements and will try to resolve them by having the requirements modified. Any ambiguous or inconsistent requirements that remain unresolved will correspondingly reduce the accuracy of the estimate.


Expert Judgment Method
Expert judgment techniques involve consulting a software cost estimation expert, or a group of experts, to use their experience and understanding of the proposed project to arrive at an estimate of its cost. Generally speaking, a group consensus technique, the Delphi technique, is the best one to use. Its strengths and weaknesses are complementary to those of the algorithmic method. To provide a sufficiently broad communication bandwidth for the experts to exchange the volume of information necessary to calibrate their estimates with those of the other experts, a wideband Delphi technique was introduced over the standard Delphi technique.
Top-Down Estimating Method
The top-down estimating method is also called the Macro Model. Using this method, an overall cost estimate for the project is derived from the global properties of the software project, and the project is then partitioned into various low-level components. The leading method using this approach is the Putnam model. This approach is most applicable to early cost estimation, when only global properties are known and little detailed information is available.

Bottom-up Estimating Method
Using the bottom-up estimating method, the cost of each software component is estimated, and the results are then combined to arrive at an estimated cost for the overall project. It aims to construct the estimate of a system from the knowledge accumulated about the small software components and their interactions.
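As a concrete illustration of an algorithmic (parametric) estimate, here is a small Python sketch using the basic COCOMO organic-mode constants, a well-known parametric model; the 32 KLOC figure is invented. Note that it produces the same three outputs as the classical view: effort, duration, and loading.

```python
# A sketch of an algorithmic estimate with basic COCOMO (organic mode).
def basic_cocomo_organic(kloc):
    effort = 2.4 * kloc ** 1.05          # effort in person-months
    duration = 2.5 * effort ** 0.38      # schedule in months
    loading = effort / duration          # average manpower loading
    return effort, duration, loading

effort, duration, loading = basic_cocomo_organic(32)
print(f"{effort:.1f} PM over {duration:.1f} months, ~{loading:.1f} people")
```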

Explain software engineering process.


The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems.
In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system[1]: the software development process. There is general agreement among software engineers on the major steps of a software process. Figure 1 is a graphical depiction of these steps. The fourth step in the process is the post-development phase, where the product is deployed to its users, maintained as necessary, and enhanced to meet evolving requirements.
The first two steps of the process are often referred to, respectively, as the "what and how" of software development. The "Analyse and Specify" step defines what problem is to be solved; the "Design and Implement" step entails how the problem is solved.


While these steps are common in most definitions of software process, there are wide variations in how process details are defined. The variations stem from the kind of software being developed and the people doing the development. For example, the process for developing a well-understood business application with a highly experienced team can be quite different from the process of developing an experimental artificial intelligence program with a group of academic researchers.
Among authors who write about software engineering processes, there is a good deal of variation in process details. There is variation in terminology, how processes are structured, and the emphasis placed on different aspects of the process. This chapter will define key process terminology and present a specific process that is generally applicable to a range of end-user software. The chapter will also discuss alternative approaches to defining software engineering processes.
Independent of technical details, there are general quality criteria that apply to any good process. These criteria include the following:
1. The process is suited to the people involved in a project and the type of software being developed.
2. All project participants clearly understand the process, or at minimum the part of the process in which they are directly involved.
3. If possible, the process is defined based on the experience of engineers who have participated in successful projects in the past, in an application domain similar to the project at hand.
4. The process is subject to regular evaluation, so that adjustments can be made as necessary during a project, and so the process can be improved for future projects.
As presented in this chapter, with neat graphs and tables, the software development process is intended to appear quite orderly. In actual practice, the process can get messy. Developing software often involves people of diverse backgrounds, varying skills, and differing viewpoints on the product to be developed. Added to this are the facts that software projects can take a long time to complete and cost a lot of money. Given these facts, software development can be quite challenging, and at times trying for those doing it.
Having a well-defined software process can help participants meet the challenges and minimise the trying times. However, any software process must be conducted by people who are willing and able to work effectively with one another. Effective human communication is absolutely essential to any software development project, whatever specific technical process is employed.

What is meant by formal approaches to SQA ?


Software Quality Assurance (SQA) is defined as a planned and systematic approach to the evaluation of the quality of, and adherence to, software product standards, processes, and procedures. SQA includes the process of assuring that standards and procedures are established and are followed throughout the software acquisition life cycle. Compliance with agreed-upon standards and procedures is evaluated through process monitoring, product evaluation, and audits. Software development and control processes should include quality assurance approval points, where an SQA evaluation of the product may be done against the applicable standards. SQA is an umbrella of activities applied throughout the software process. It encompasses the following:
1. A quality management approach
2. Effective software engineering technology (methods and tools)
3. Formal technical reviews that are applied throughout the software process
4. A multi-tiered testing strategy
5. Control of software documentation and the changes made to it
6. Procedures to assure compliance with software development standards
7. Measurement and reporting mechanisms

How to define a task network ?


A task network, also called an activity network, is a graphic representation of the task flow for a project. It is sometimes used as the mechanism through which task sequence and dependencies are input to an automated project scheduling tool. In its simplest form (used when creating a macroscopic schedule), the task network depicts major software engineering tasks.


The concurrent nature of software engineering activities leads to a number of important scheduling requirements. Because parallel tasks occur asynchronously, the planner must determine intertask dependencies to ensure continuous progress toward completion. In addition, the project manager should be aware of the tasks that lie on the critical path, that is, the tasks that must be completed on schedule if the project as a whole is to be completed on schedule.
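As a rough illustration, here is a small Python sketch that computes the critical path of a toy task network; the tasks, durations, and dependencies are invented, and the dependency graph is assumed to be acyclic:

```python
# A small sketch of a task network and its critical path (toy data).
durations = {"spec": 3, "design": 5, "code": 7, "test": 4, "docs": 2}
depends_on = {
    "spec": [],
    "design": ["spec"],
    "code": ["design"],
    "docs": ["design"],          # runs in parallel with coding
    "test": ["code", "docs"],
}

# Earliest finish time of each task (longest path from the start).
finish = {}
def earliest_finish(task):
    if task not in finish:
        start = max((earliest_finish(d) for d in depends_on[task]), default=0)
        finish[task] = start + durations[task]
    return finish[task]

end_task = max(durations, key=earliest_finish)
# Walk back along the predecessor that determines each task's start time.
path, t = [end_task], end_task
while depends_on[t]:
    t = max(depends_on[t], key=lambda d: finish[d])
    path.append(t)
print("critical path:", " -> ".join(reversed(path)), "| length:", finish[end_task])
```

For this toy network the sketch reports spec -> design -> code -> test with length 19; the "docs" task has slack and lies off the critical path.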

What are the software planning objectives ?


The objective of software project planning is to provide a framework that enables the manager to make reasonable estimates of:

  • Resources
  • Cost
  • Schedule

These estimates are made within a limited time frame at the beginning of a software project and should be updated regularly as the project progresses. In addition, estimates should attempt to define best-case and worst-case scenarios so that project outcomes can be bounded.
Planning is one of the most important management activities and is an ongoing effort throughout the life of the project. Software project management begins with a set of activities that are collectively called project planning. The software project planner must estimate the following before a project begins:

  • How much will it cost?
  • How long will it take?
  • How many people will it take?
  • What might go wrong?

How to evaluate a software product ?

Software Product Evaluation can be regarded as an instrument to support such control. During a Software Product Evaluation, the fit between the software product and the needs it must meet is determined. This fit concerns both explicit and implicit needs, often referred to as software product quality. By examining, on the one hand, the needed level of product quality and, on the other hand, whether a product meets that level of quality, fitness for use is evaluated. This can be done during several phases of development and use, which results in increased control during the transformation from investment decision to actual implementation.
The relation between Evaluation of Information Technology and Software Product Evaluation can be described as follows. On the one side is the development of investment proposals during Evaluation of Information Technology; the decision process is not refined here and is assumed to be a process that comes up with a 'best' proposal. On the other side, Software Product Evaluation is a process carried out during implementation. The result of evaluation is software product quality at different moments. There is a relation between the investment proposal and the actual software product quality: the initial investment proposal is translated into intended software product quality at the start of implementation, and during implementation several versions of the product can be compared to the intended software product quality of the investment proposal.
Software Product Evaluation is becoming a growing market within the software industry. Customers and users get the opportunity to have a potential product evaluated against their needs, and are demanding such evaluations more and more. Certification institutes and evaluation companies, for their part, push their evaluations into the market and increase revenues from these new and well-received services. Producers of these software products are also confronted with Software Product Evaluations after their products have been developed. The industry is therefore becoming proactive towards such evaluations and is changing development processes so that evaluation demands are directly addressed.
Software Product Evaluation addresses software product quality. Quality characteristics are used as attributes to describe a software product. During the short history of software engineering, several quality models, in which the relations between the quality characteristics are determined, have been presented. Each quality characteristic is split into several sub-characteristics. For example, the quality characteristic 'maintainability' is divided into four sub-characteristics: analysability, changeability, stability, and testability. The ISO 9126 standard defines such a quality model.
Evaluations of software products must be objective, based upon observation rather than opinion. They should also be reproducible: evaluation of the same product against the same evaluation specification by different evaluators should produce results that can be accepted as identical and repeatable. To achieve this, procedures for project control and judgement are necessary, and at the top level an evaluation process should be defined. During the SCOPE project such an evaluation process was defined; it was originally presented in five steps: analyse the evaluation requirements, specify the evaluation, design and plan the evaluation, perform the evaluation, and report the results.

Differentiate system software and application software.


System software and application software are both computer programs. System software is installed along with the operating system, whereas application software utilises the capabilities of the computer to carry out the tasks of the user.

Difference between system software and application software
• System software gets installed when the operating system is installed on the computer while application software is installed according to the requirements of the user.
• System software includes programs such as compilers, debuggers, drivers, assemblers while application software includes media players, word processors, and spreadsheet programs.
• Generally, users do not interact with system software as it works in the background whereas users interact with application software while doing different activities.
• A computer typically requires only one set of system software, while a number of application software programs may be installed on the computer at the same time.
• System software can run independently of the application software while application software cannot run without the presence of the system software.

Money Market VS Capital Market

Money markets are used for short-term lending and borrowing, usually for assets with maturities of up to one year. Conversely, capital markets are used for long-term assets, meaning assets with maturities greater than one year. Capital markets include the equity (stock) market and debt (bond) market. Together, the money and capital markets comprise a large portion of the financial market and are often used together to manage liquidity and risk for companies, governments, and individuals.
Capital Markets
Capital markets are perhaps the most widely followed markets. Both the stock and bond markets are closely followed and their daily movements are analysed as proxies for the general economic condition of the world markets. As a result, the institutions operating in capital markets - stock exchanges, commercial banks and all types of corporations, including nonbank institutions such as insurance companies and mortgage banks - are carefully scrutinised.
The institutions operating in the capital markets access them to raise capital for long-term purposes, such as for a merger or acquisition, to expand a line of business or enter into a new business, or for other capital projects. Entities that are raising money for these long-term purposes come to one or more capital markets. In the bond market, companies may issue debt in the form of corporate bonds, while both local and federal governments may issue debt in the form of government bonds. Similarly, companies may decide to raise money by issuing equity on the stock market. Government entities are typically not publicly held and, therefore, do not usually issue equity. Companies and government entities that issue equity or debt are considered the sellers in these markets.
The buyers, or the investors, buy the stocks or bonds of the sellers and trade them. If the seller, or issuer, is placing the securities on the market for the first time, then the market is known as the primary market. Conversely, if the securities have already been issued and are now being traded among buyers, this is done on the secondary market. Sellers make money off the sale in the primary market, not in the secondary market, although they do have a stake in the outcome (pricing) of their securities in the secondary market.


Money Market
The money market is often accessed alongside the capital markets. While investors are willing to take on more risk and have patience to invest in capital markets, money markets are a good place to "park" funds that are needed in a shorter time period - usually one year or less. The financial instruments used in capital markets include stocks and bonds, but the instruments used in the money markets include deposits, collateral loans, acceptances and bills of exchange. Institutions operating in money markets are central banks, commercial banks and acceptance houses, among others.



Money markets provide a variety of functions for either individual, corporate or government entities. Liquidity is often the main purpose for accessing money markets. When short-term debt is issued, it is often for the purpose of covering operating expenses or working capital for a company or government and not for capital improvements or large scale projects. Companies may want to invest funds overnight and look to the money market to accomplish this, or they may need to cover payroll and look to the money market to help. The money market plays a key role in ensuring companies and governments maintain the appropriate level of liquidity on a daily basis, without falling short and needing a more expensive loan or without holding excess funds and missing the opportunity of gaining interest on funds.
