SOFTWARE DEVELOPMENT WITH VISUAL BASIC
Subject Description : This course helps students develop a front-end application using Visual Basic.
Goals : To enable students to develop a front-end tool for customer interaction in business.
Objectives : After successful completion of the course, the student must be able to develop an application using Visual Basic.
Unit – I Introduction – Client/Server – Benefits of Client/Server – Downsizing – Upsizing – Rightsizing – Client/Server Models – Distributed Presentation – Remote Presentation – Remote Data – Distributed Logic – Distributed Data – Client/Server Architecture – Technical Architecture – Application Architecture – Two-Tier Architecture – Three-Tier Architecture – OLTP & n-Tier Architecture.
Unit – III Functions – Procedures – Control Structure : If - Switch – Select – For – While – Do While - Arrays – User Defined Data Types – Data Type Conversions - Operators – String Functions – Date and Time Functions.
Unit – IV Creating and Using Standard Controls: Form, Label, Text box, Command Button, Check Box, Option Button, List Box, Combo Box, Picture Box, Image Controls, Scroll Bars – Drive List Box – Directory List Box - Time Control, Frame, Shape and Line Controls – Control Arrays – Dialog Boxes - Single Document Interface (SDI) – Multiple Document Interface (MDI) – Menus – Menu Editor – Menu Creation.
Unit – V Data Controls – Data Access Objects (DAO) – Accessing and Manipulating Databases – Recordset – Types of Recordset – Creating a Recordset – Modifying, Deleting Records – Finding Records - Data Report – Data Environment – Report - Designer – Connection Object – Command Object – Section of the Data Report Designer – Data Report Controls
About client/server:
Client/server architecture outline:
– A form of distributed processing
– Hardware: LAN, back-end server, front-end station
– Software: communication software, back-end software, front-end tool
– Applications: client/server databases, e-mail software, groupware
Client-Server computing:
“Client/server is
a computational architecture that involves client processes requesting service
from server processes.”
Client/server computing is the
logical extension of modular programming. Modular programming has as its
fundamental assumption that separation of a large piece of software into its
constituent parts ("modules") creates the possibility for easier development
and better maintainability.
Client/server computing takes this a
step farther by recognizing that those modules need not all be executed within
the same memory space. With this architecture, the calling module becomes the
"client" (that which requests a service), and the called module
becomes the "server" (that which provides the service).
The logical extension of this is to have
clients and servers running on the appropriate hardware and software platforms
for their functions. For example, database management system servers running on
platforms specially designed and configured to perform queries, or file servers
running on platforms with special elements for managing files. It is this latter perspective that has created
the widely-believed myth that client/server has something to do with PCs or
UNIX machines.
Client process:
“The
client is a process or program that sends a message to a server process or
program, requesting the server to perform a task or service.”
Client
programs usually manage the user-interface portion of the application, validate
data entered by the user, dispatch requests to server programs, and sometimes
execute business logic. The client-based process is the front-end of the
application that the user sees and interacts with.
The
client process contains solution-specific logic and provides the interface
between the user and the rest of the application system. The client process
also manages the local resources that the user interacts with such as the
monitor, keyboard, workstation CPU and peripherals.
One of
the key elements of a client workstation is the graphical user interface (GUI).
Normally part of the operating system, the window manager detects user
actions, manages the windows on the display, and displays the data in the
windows.
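As a minimal illustration of this front-end role, the Visual Basic sketch below validates user input before dispatching a request; the control and procedure names (txtCustomerName, SendRequestToServer) are hypothetical:

Private Sub cmdSubmit_Click()
    ' Client-side validation: catch bad input before any server round trip
    If Len(Trim$(txtCustomerName.Text)) = 0 Then
        MsgBox "Customer name is required.", vbExclamation
        Exit Sub
    End If
    ' Dispatch the validated request to the server process
    SendRequestToServer Trim$(txtCustomerName.Text)
End Sub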
Server process:
“A server process is a process or program
that fulfills the client request by performing the task requested.”
Server
programs generally receive requests from client programs, execute database
retrieval and updates, manage data integrity and dispatch responses to client
requests. Sometimes server programs execute common or complex business logic.
The
server-based process may run on another machine on the network. This server
could be the host operating system or a network file server; the server then
provides both file system services and application services. Or, in some cases,
another desktop machine provides the application services.
The
server process acts as a software engine that manages shared resources such as
databases, printers, communication links, or high powered-processors. The
server process performs the back-end tasks that are common to similar
applications.
Cooperative Processing:
“Cooperative processing is computing
which requires two or more distinct processors to complete a single
transaction.”
Cooperative
processing is related to both distributed and client/server processing. It is a
form of distributed computing where two or more distinct processes are required
to complete a single business transaction.
Usually,
these programs interact and execute concurrently on different processors.
Cooperative processing can also be considered to be a style of client/server
processing if communication between processors is performed through a message
passing architecture.
Distributed Processing:
“Distributed processing is the
distribution of applications and business logic across multiple processing
platforms.”
Distributed
processing implies that processing will occur on more than one processor in
order for a transaction to be completed.
In other
words, processing is distributed across two or more machines and the processes
are most likely not running at the same time, i.e. each process performs part
of an application in a sequence. Often the data used in a distributed
processing environment is also distributed across platforms.
TWO-TIER ARCHITECTURE:
“Two-tier
architecture is where a client talks directly to a server, with no intervening
server. It is typically used in small
environments (less than 50 users)”
In two-tier
client/server architectures, the client interface is usually located in the
user's desktop environment and the database management services are in a more
powerful server that services many clients. The user system interface
environment and the database management server environment split the processing
management duties. The database management server contains stored procedures
and triggers.
Two-tier architectures are typical of:
· environments with few clients
· homogeneous environments
· closed environments (e.g. DBMS)
The characteristics of two-tier architecture include:
1. Application components are
distributed between the server and client software.
2. In addition to part of the
application software, the server also stores the data and all data accesses are
through the server.
3. The presentation to the user is
handled strictly by the client software.
4. The PC clients assume the bulk of
the responsibility for the application logic.
5. The server assumes the bulk of
the responsibility for data integrity checks, query capabilities, data
extraction and most of the data intensive tasks, including sending the appropriate
data to the appropriate clients.
The whole point
of client-server architecture is to distribute components of an application
between a client and a server so that, for example, a database can reside on a
server machine (for example a UNIX box or mainframe), a user interface can
reside on a client machine (a desktop PC), and the business logic can reside in
either or both components.
Client/server applications
started with a simple, 2-tiered model consisting of a client and an application
server.
Fat Client/Server Deployment:
The most common implementation
is a 'fat' client - 'thin' server architecture, placing application logic in
the client. The database simply reports the results of queries implemented via
dynamic SQL using a call level interface (CLI) such as Microsoft's Open
Database Connectivity (ODBC).
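As a sketch of what such a fat-client query looks like in Visual Basic, the fragment below issues dynamic SQL through ODBC using ADO; it assumes a project reference to Microsoft ActiveX Data Objects and a hypothetical DSN, table, and credentials:

Dim cn As New ADODB.Connection
Dim rs As ADODB.Recordset

' Connect through the ODBC Driver Manager using a configured data source
cn.Open "DSN=SalesDB;UID=appuser;PWD=secret"

' The client builds the SQL; the server simply executes it and returns rows
Set rs = cn.Execute("SELECT CustName, Balance FROM Customers WHERE Region = 'West'")
Do Until rs.EOF
    Debug.Print rs!CustName, rs!Balance
    rs.MoveNext
Loop
rs.Close
cn.Close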
Thin Client/Server Deployment:
An alternate approach is to
use a thin client - fat server configuration that invokes procedures stored at the
database server. The term thin client generally refers to user devices whose
functionality is minimized, either to reduce the cost of ownership per desktop
or to provide more user flexibility and mobility.
In either case, presentation
is handled exclusively by the client, processing is split between client and
server, and data is stored on and accessed through the server. Remote database
transport protocols such as SQL-Net are used to carry the transaction. The
network 'footprint' is very large per query so that the effective bandwidth of
the network, and thus the corresponding number of users who can effectively use
the network, is reduced. Furthermore, network transaction size and query
transaction speed is slowed by this heavy interaction. These architectures are
not intended for mission critical applications.
Development tools that
generate 2-tiered fat client implementations include PowerBuilder, Delphi,
Visual Basic, and Uniface. The fat server approach, using stored procedures, is
more effective in gaining performance, because the network footprint, although
still heavy, is lighter than that of a fat client.
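A hedged sketch of the fat-server alternative in Visual Basic: the client invokes a stored procedure through an ADO Command object, so only the procedure name and parameters cross the network. The procedure name and parameter here are hypothetical:

Dim cn As New ADODB.Connection
Dim cmd As New ADODB.Command
Dim rs As ADODB.Recordset

cn.Open "DSN=SalesDB;UID=appuser;PWD=secret"
Set cmd.ActiveConnection = cn
cmd.CommandType = adCmdStoredProc
cmd.CommandText = "usp_GetCustomerOrders"   ' hypothetical stored procedure
cmd.Parameters.Append cmd.CreateParameter("CustID", adInteger, adParamInput, , 42)

' The business logic runs inside the database server, not on the client
Set rs = cmd.Execute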
Example:
The UNIX print
spooler is an example of two-tier client-server architecture. The client (the
UNIX lp command) reads a file to be printed and passes the file's
contents to the server. The server performs a service by printing the file.
Advantages:
Accessibility: The server can be accessed remotely and across multiple platforms.
Speed: Good application development speed.
Durability: Most tools for the 2-tier architecture are very robust.
Development: Ease of application development.
Economy: Lower total costs than “mainframe legacy systems”.
User friendly: It uses the familiar point-and-click interface.
Stability: Two-tier architectures work well in relatively homogeneous environments with fairly static business rules.
Disadvantages:
Non-Adaptability: 2-tier architecture is not suited for dispersed,
heterogeneous environments with rapidly changing business logic.
Software Incompatibility: Because the bulk of the application logic is
on the client, there is a problem of client software version control and new
version redistribution.
Complexity: Security can be complicated because a user may require separate passwords
for each SQL server accessed.
THREE-TIER ARCHITECTURE:
“Three-tier architecture introduces a server or an
"agent" between the client and the server. The role of the agent is many-fold.”
It can provide translation
services (as in adapting a legacy application on a mainframe to a client/server
environment), metering services (as in acting as a transaction monitor to limit
the number of simultaneous requests to a given server), or intelligent agent
services (as in mapping a request to a number of different servers, collating
the results, and returning a single response to the client).
The most
popular type of n-tier client-server architecture to evolve from two-tier architecture
was three-tier architecture which separated application components into three
logical tiers.
The components of three-tiered architecture are divided into three layers:
Ø A presentation layer,
Ø Functionality layer, and
Ø Data layer
Application components are well-defined and separate
processes, each running on a different platform:
1. The user
interface, which runs on the user's computer (the client).
2.
The functional modules that actually process data.
This middle tier runs on a server and is often called the application
server.
3. A database
management system (DBMS) that stores the data required by the middle
tier. This tier runs on a second server called the database server.
In this type of
system, the user interface tier communicates only with the business logic tier,
never directly with the database access tier. The business logic tier
communicates both with the user interface tier and the database access tier.
The 3-tier architecture attempts to
overcome some of the limitations of 2-tier schemes by separating presentation,
processing, and data into separate distinct entities. The middle-tier servers
are typically coded in a highly portable, non-proprietary language such as C.
Middle-tier functionality servers may be multithreaded and can be accessed by
multiple clients, even those from separate applications.
The client
interacts with the middle tier via a standard protocol such as DLL, API, or
RPC. The middle-tier interacts with the server via standard database protocols.
The middle-tier contains most of the
application logic, translating client calls into database queries and other
actions, and translating data from the database into client data in return. If
the middle tier is located on the same host as the database, it can be tightly
bound to the database via an embedded 3GL interface.
This yields a very highly controlled and
high performance interaction, thus avoiding the costly processing and network
overhead of SQL-Net, ODBC, or other CLIs. Furthermore, the middle tier can be
distributed to a third host to gain processing power capability.
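As a sketch of where that middle-tier logic might live, the Visual Basic class below (imagine it compiled as an ActiveX DLL on an application server) translates a client call into a database query; all names are hypothetical, not a prescribed design:

' Class module CustomerService in a hypothetical middle-tier ActiveX DLL
Public Function GetBalance(ByVal CustID As Long) As Currency
    Dim cn As New ADODB.Connection
    Dim rs As ADODB.Recordset
    ' Only the middle tier ever touches the database tier
    cn.Open "DSN=AccountsDB;UID=midtier;PWD=secret"
    Set rs = cn.Execute("SELECT Balance FROM Accounts WHERE CustID = " & CustID)
    If Not rs.EOF Then GetBalance = rs!Balance
    cn.Close
End Function

A client would obtain the object with CreateObject("AcctSrv.CustomerService") and call GetBalance, never issuing SQL itself.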
Advantages of 3-Tier Architecture:
RPC
calls provide greater overall system flexibility than SQL calls in 2-tier
architectures.
3-tier presentation client is not
required to understand SQL. This allows firms to access legacy data, and
simplifies the introduction of new database technologies.
It provides for
more flexible resource allocation.
Modularly
designed middle-tier code modules can be reused by several applications.
3-tier systems such as Open
Software Foundation's Distributed Computing Environment (OSF/DCE) offer
additional features to support distributed applications development.
The added modularity makes it easier to
modify or replace one tier without affecting the other tiers. Separating the
application functions from the database functions makes it easier to implement
load balancing.
N-TIER ARCHITECTURE:
The 3-tier
architecture can be extended to N-tiers when the middle tier provides
connections to various types of services, integrating and coupling them to the
client, and to each other. Partitioning the application logic among various
hosts can also create an N-tiered system. Encapsulation of distributed
functionality in such a manner provides significant advantages such as
reusability, and thus reliability.
As applications
become Web-oriented, Web server front ends can be used to offload the
networking required to service user requests, providing more scalability and
introducing points of functional optimization.
In this
architecture, the client sends HTTP requests for content and presents the
responses provided by the application system.
On
receiving requests, the Web server either returns the content directly or
passes it on to a specific application server.
The
application server might then run CGI scripts for dynamic content, parse
database requests, or assemble formatted responses to client queries, accessing
data or files as needed from a back-end database server or a file server.
Figure: Web-Oriented N-Tiered Architecture
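On the client side of this arrangement, a Visual Basic form can issue the HTTP request with the Microsoft Internet Transfer Control; this is a minimal sketch, and the URL is a hypothetical example:

Private Sub cmdFetch_Click()
    Dim response As String
    ' Synchronous GET: the web server either returns the content itself
    ' or delegates the request to an application server
    response = Inet1.OpenURL("http://appserver/reports/balance?custid=42")
    txtResult.Text = response
End Sub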
By
segregating each function, system bottlenecks can be more easily identified and
cleared by scaling the particular layer that is causing the bottleneck. For
example, if the Web server layer is the bottleneck, multiple Web servers can be
deployed, with an appropriate server load-balancing solution to ensure
effective load balancing across the servers as shown below.
Figure: Four-Tiered Architecture with Server Load Balancing
Advantages:
The N-tiered approach has several benefits:
Different aspects of the application can be developed and
rolled out independently.
Servers can be optimized separately for database and
application server functions.
Servers can be sized appropriately for the requirements of
each tier of the architecture.
More overall server horsepower can be deployed.
COMPARISON BETWEEN THE TIER ARCHITECTURES:

Stable, low-volume growth – Two-tier
Low reporting and batch processing needs – Two-tier
Minor integration of other technology (e.g., Internet) – Two-tier
LAN-based application deployment – Two-tier
Variable system deployment scenarios at different levels of business using LANs & WANs – Three-tier
Regular changes in business logic and rules – Three-tier
Extensive use of Internet or telephony integration – Three-tier
WAN-based application deployment – Three-tier
Variable-demand batch processing – N-tier
Variable-demand report processing – N-tier
Web service process delivery – N-tier
Casual use by many networked clients – N-tier
BENEFITS OF CLIENT/SERVER
A properly designed
client/server system provides a company and its employees with numerous
benefits. Such a system enables people to do their jobs better by allowing them
to focus their time and energies on acquiring new accounts, closing deals and
working with customers, rather than on administrative tasks. It provides
instant access to information for decision-making, facilitates communication,
and reduces time, effort and cost for accomplishing tasks. The following
sections outline the major benefits of client/server.
Improved Information Access
A well-designed
client/server system provides users with easy access to all the information
that they need to get their jobs done. With a few mouse button clicks, the
user-friendly front-end application displays information that the user
requests. This information may reside at different databases or even on
physically separate servers, but the intricacies of data access are hidden from
the user. The client/server system also contains powerful features that enable
the users to further analyze this retrieved information. Therefore, they can
manipulate this information to answer “what-if” questions. Because all this
information access and functionality is provided from a single system, users no
longer need to log into several different systems or depend on other people to
get their answers.
Increased Productivity
A client/server
system increases its users’ productivity by providing them with the tools to
complete their tasks faster and more easily. For example, a powerful data-entry
screen with graphical controls and programming logic to support business rules
enables users to enter information more quickly and with fewer errors and
omissions. It automatically validates information, performs calculations, and
reduces duplicate data entry. Client/server systems can be integrated with
other technologies such as e-mail, document imaging, and groupware to lead to
additional productivity gains.
Automated Business Process
A
client/server can automate a company’s business processes and be a workflow
solution by eliminating a great deal of manual labor and enabling processes to
be completed sooner with fewer errors. For example, a company’s current
business process of completing a purchase order is completely manual. It
involves searching through a cabinet to find a purchase order form, filling it
out, performing all the calculations with a calculator, determining who should
approve it, and then sending it to that person through interoffice mail. A
client/server system can automate this process and accomplish it in a fraction
of the time. An electronic version of the purchase order can be designed in the
front-end application and be available on-line. Using the GUI, a user quickly
enters the information, and the system automatically performs all the
calculations. Then the form is automatically routed across the network to the
appropriate person (based on a business rule) for approval. The approver
immediately receives the purchase order in their electronic in-box for review and
does not have to wait for it to arrive through interoffice mail.
Powerful Reporting Capabilities
Because
the information in a client/server system is stored in a relational database,
the information can be easily queried for reporting purposes. Programmers can,
of course, quickly create new reports by using SQL. However, client/server
systems can provide features that enable end users to create their own reports
and customize existing ones without having to learn SQL. With these
capabilities, users can generate reports much faster than in the past and are
no longer completely dependent on IS to provide reports. Those people who used
to take a hard copy report and then retype all the information into a spreadsheet
so that they could regenerate reports save a tremendous amount of time by using
the client/server system.
Improved Customer Service
A company can
improve its customer service by providing faster answers and minimizing the
number of times that a customer has to contact the company. A client/server
enables customer service representatives to service their customer better, and
one key reason is its ability to provide information from different data
sources. A bank, for example, may have several physically separate databases.
Each of these databases stores a specific type of customer account information,
such as savings, mortgage and student loan. Currently, a customer who has all
three types of accounts with this bank and needs information on all them has to
call three different numbers, which is very inconvenient. A client/server
system can be designed to provide a customer service representative with access
to information from all three databases. Therefore, the customer only needs to
call one number. Customers are looking for this type of convenience.
Rapid Application Development
Most
client/server development tools enable programmers to create applications by
taking advantage of object-oriented programming techniques and developing
application modules. By reusing objects and code that have already been written
for existing systems, new client/server systems can be developed much faster.
GUI design tools provide drag-and-drop facilities that allow programmers to
quickly create visual screens without having to program the underlying code.
Client/server applications can be easily modified in case a change, such as a
new business rule, is necessary. In addition, client/server tools can be used
to quickly create system prototypes that enable the developer to demonstrate
the system to users and get immediate feedback.
Cost reductions and savings
A
client /server system reduces costs in a number of ways, some of which are
easier to quantify than others. Many companies have replaced their mainframe
systems with client/server and saved millions of dollars in annual maintenance
costs. Others have benefited from the on-line information access and
significantly reduced their paper-associated costs including its purchase,
storage and distribution. This on-line information also enables people to
quickly identify marketing campaigns and sales strategies that are failing and
then cancel them before wasting any more money. Because people can accomplish
their tasks faster, they save time and effort, which also translates into
financial savings. Also, as employees are empowered and able to do more, the
number of employees can be reduced, if that is a company goal.
Increased revenue
A client
/server system does not generate revenue itself. However, by providing easy access to crucial information along with data
analysis tools, it can play a significant role in contributing to increasing
revenue by enabling people to identify opportunities and to make the right
decisions. The following are some examples of how a client/server system
contributes to increased revenue:
· Enables a new product to be developed faster so that it hits the market sooner
· Enables a company to spot sales opportunities faster
· Identifies which marketing campaigns work well and should be used again
· Identifies what types of products and features a particular customer base wants
· Identifies sales trends that you can use to your advantage
Quick Response to the Changing
Marketplace
Businesses are
changing rapidly. The marketplace is now more competitive than ever and will
continue to be more and more so. Companies are faced with the challenge of
keeping their business up-to-date, and they must do business efficiently in
order to remain in the marketplace.
The computer
systems that were developed in the 1980s tended to be based around a
centralized computer system. LANs were connected to this system, yet little or
no real business processing was done on the LANs or the PCs. Any change to the
business was made on the centralized system. If a new product was to be sold or
a new accounting system was to be implemented, it was normally placed on the
main computer. As time went on and more and more systems were placed on the
centralized computer, the costs of running this machine rose. The time to
change this system if a new business function was needed also increased. Over
time, this situation has become so bad that it is not uncommon to hear of
systems taking in excess of three years to develop and implement when the
product needs to be ready for the marketplace in six months.
N-TIER ARCHITECTURE
Definition:
In software engineering, multi-tier architecture (often referred to as
n-tier architecture) is a client-server architecture in which an application is
executed by more than one distinct software agent. For example, an application
that uses middleware to service data requests between a user and a database
employs multi-tier architecture. The most widespread use of "multi-tier
architecture" refers to three-tier architecture.
CASE STUDY - TWENTIETH CENTURY FOX
Upgrading the Financials System in a High-Utilization Organization
Challenge: To upgrade the
ERP Financials system and transition to an internet-enabled self-service
applications environment while supporting a large user base and maintaining a
high level of uptime.
Solution: In order to
ensure that web-based n-tier architecture met all of Fox's requirements, the
CherryRoad team conducted a comprehensive pre-upgrade planning, load testing
and system monitoring. CherryRoad's rigorous, structured approach to load
testing incorporated a proven third-party automated testing product.
Benefits: Twentieth
Century Fox was able to roll out their Financials system to a large user
population without experiencing any significant performance issues.
Twentieth Century Fox is a $4
billion integrated entertainment company with operations in three business
segments: Filmed Entertainment, Twentieth Century Fox Television Studios, and
Cable Network Programming. A News Corporation Company subsidiary, Fox is based
in Beverly Hills, California and has more than 8,000 employees and contractors.
Transitioning to a Web-Based N-Tier Architecture
When Fox made the decision to
upgrade its ERP Financials system, it faced some of the same challenges that
many large enterprises encounter in transitioning to a web-based architecture,
including:
- Ensuring acceptable online performance for a large number (500) of end-users
- Supporting a high volume of batch processes, especially during peak periods of report processing
- Maintaining high uptime requirements
- Minimizing new hardware procurement costs
Fox engaged CherryRoad
Technologies for the upgrade, based on CherryRoad's successful past work with
the company on Financials implementations, upgrades, and evaluations.
CherryRoad had implemented Fox's Accounts Receivable and Billing systems, then
upgraded the overall Financials system, and implemented Asset Management – all
successful projects, completed on time and on budget.
For the upgrade, CherryRoad laid
out a plan to ensure that the web-based architecture met all of Fox's
requirements:
Pre-Upgrade Planning –
Before the upgrade, perform Upgrade Readiness Evaluation, including designing a
comprehensive hardware architecture that included all components, costs, and
configuration of web-based n-tier architecture.
Load Testing – During the
upgrade, utilize Segue SilkPerformer utilities to stress test the online and
batch components to determine their upper limits.
System Monitoring – For
post-production support, establish monitoring procedures and make additional
recommendations to enable IT to constantly monitor all components of the
architecture to proactively prevent issues.
Pre-Upgrade Planning
Prior to the upgrade,
CherryRoad initiated the project with a Readiness Assessment, which included
architecting the new hardware environment. To ensure the new web-based
architecture would support their extensive online and batch requirements, the
CherryRoad team used industry benchmarks, best practices, and normalized
hardware metrics to define baselines. They quickly captured critical data to
properly size infrastructure requirements and configured report servers to
eliminate bottlenecks.
The team also addressed critical
factors in designing the hardware architecture, including issues of
scalability, administration, and load balancing and failover. They used
multiple smaller servers in a server cluster – a more scalable solution than a
single large server. In addition, in selecting servers, they identified the
vendors' latest product lines, to maximize support and maintenance, and used
the initial baseline benchmarks to validate the choices. The end result was a comprehensive
specification document that included alternative hardware configurations,
server and switch model numbers, software and middleware, and detailed budget.
Fox was therefore able to procure the new hardware and receive vendor
certification well before the upgrade began.
Load Testing
CherryRoad validated the
configuration with a rigorous and structured approach to load testing with a
proven third-party automated testing product. Fox and CherryRoad partnered with
Segue, a leading provider of load testing applications, to assist in this
effort. Using the SilkPerformer product, the team simulated conditions of
high-volume online users, peak batch processing periods, and intensive
transaction processing.
A key focus was Fox's extensive
use of nVision reporting, which under the new architecture centralized all
report processing and could potentially create bottlenecks. Because of the
careful planning done before the upgrade, the configured servers were able to
pass load testing and proved that hardware issues would be minimized at the
completion of the upgrade.
System Monitoring
In order to ensure that hardware
problems are detected and proactively solved on an ongoing basis, CherryRoad
assisted Fox in implementing a systematic process of monitoring all
internet-enabled self-service applications components. This included using
utilities such as Tuxedo monitors, as well as those delivered with the Oracle
RDBMS. The Unix operating system also provides various tools that provide
statistics on system utilization. Fox is also using Segue Service Analysis
Module (SAM) to monitor back-end systems and create effective monitoring
metrics.
A Successful Launch
Transitioning to an
internet-enabled environment requires careful planning, particularly for
organizations with a large number of users and high processing requirements. It
is therefore critical that planning and testing be performed before, during,
and after the upgrade. CherryRoad effectively helped Twentieth Century Fox
navigate through this transition.
As Cindy McKenzie, VP of
Corporate IT for Fox said, “Thanks to CherryRoad's comprehensive approach to
infrastructure design and testing, we were able to roll out our Financials
system to our large user base without experiencing any significant performance
issues.”
CASE STUDY OF N-TIER ARCHITECTURE
MASTER’S ACADEMY & COLLEGE
Company Overview
Master’s Academy & College, based in Calgary,
Alberta, opened its doors in 1997 and now has a total of 600 students. The
vision of the school is about creating Profound Learning, a 21st century model
for value-based education. Profound Learning aims to exceed all the current
standards set by Alberta Education, and to equip students as knowledge workers
with skills to enable them to succeed in an ever-changing world. Integral to
the Master’s philosophy is a commitment to technology. As such the school has a
better than 2:1 student to computer ratio with 300 desktop units and 50
laptops all running the Windows 2000 operating system. These client computers
are all networked around several network servers running Windows 2000 Advanced
Server to provide file sharing, email and high-speed Internet connectivity for
every student from every computer.
Business Challenge
Aside from the school’s commitment to technology,
what makes the school different, and what it considers one of its key methods
in producing superior students is the Master’s assessment system. In a
nutshell, benchmarks are set to establish a basic quality standard for student
work.
Bonus marks are available for exceeding the
quality standard (EQS) but penalties are also applied if, for example, work is
handed in late. If a student submits unsatisfactory work, then the teacher will
not accept it. Students are expected to rework their assignments until the
quality standard is met. The philosophy of this method reflects the school’s
belief that every student can produce quality work.
Master’s students are encouraged to produce
quality work handed in on time and, whenever possible, to exceed the quality
standard.
The problem Master’s faced with its marking
system was that there were several criteria to be recorded and blended before
an assignment’s final mark could be reached: the quality grade and any EQS
bonuses or penalties. What it produced for school administrators was a vast
amount of data that had to be collated before each and every grade could be
calculated.
They were using a basic database and spreadsheet
system, but the solution was cumbersome. It was awkward to enter and interpret
the data, because the system was not designed for the Master’s model. They
needed a more robust, scalable and tailored solution. Having looked across the
market for the newest and most powerful technology the school chose Microsoft’s
.NET platform.
Solution
An early solution to the problem was tried in a
prototype form using Microsoft’s Excel spread sheet system. This allowed
teachers to compute the final mark based on all of the criteria, but it was
extremely cumbersome. This prototype was refined for two years until all the
parameters of the assessment system were in place.
EDS Canada in Calgary was called in at this stage
to develop and install a customized system. This was achieved using the
Microsoft .NET platform with the specific implementation of a SQL Server™
2000 database, and the creation of a user interface using the beta-stage
Visual Studio development system .NET software suite. More specifically, the
interface was developed using the Visual Basic 6.0 development system and
JavaScript languages. To ensure that there was teacher-only secure access to
the network, existing Windows 2000 Active Directory directory service
authentication was used. Report cards will soon be generated as PDF files using
Crystal Decisions' Crystal Reports to create a read-only document for students
and parents to see.
The N-Tier architecture was a natural choice for
the Master’s project as it offered a strong solution based on the client/server
program model. This distributed computing model is part of the fundamental
basis of the .NET platform for delivering Web services. This architecture
enables application programs to be distributed across three or more disparate
computers or servers in a distributed network environment. In this environment,
user interface programming is done on the individual user’s computer, business
processes are done on a centralized computer and data that is needed is stored
on a database managed by an alternate computer.
By utilizing the N-Tier architecture, Master’s is
able to take advantage of a network in which any one tier can run the
appropriate operating system platform or processor and can be updated
separately without disrupting any of the other tiers. This ensures that any
upgrades to the network that occur, happen seamlessly without compromising the
performance of the network.
N-Tier was also an obvious choice to avoid the
problem of having a solution directly connected to the database. That
arrangement creates data bottlenecks when too much data tries to pass through.
Using N-Tier means that if the school should decide to modify its network, then
the entire system will not need to be extensively revamped, but scaled to need.
The architecture of the complete solution allows teachers to easily implement
the school's unique marking system, while leaving it flexible enough for a wide
variety of web service expansions.
Business Benefits
The objective of a school is not to make money.
Master’s goals are qualitative, not quantitative: the number of students
produced is not as important as the quality of each student. In this objective
Master’s differs from a traditional business, because it could enroll an
endless number of students and still fail in its mission.
Where Master’s is exactly like a modern business
is in concern for lost time. The .NET platform is going to help the school
outpace other schools by allowing it to know in advance which students need
more assistance in a specific area:
“Typically you go for a teacher parent interview
after 3-4 months of school. You go in and they say that Johnny is not working
up to his potential. You don’t know where the missing piece is. You’d like to
have seen something happen 3 months ago, but nobody knew what was happening.
What we’re looking at is the timeliness of relevant information being captured
and presented, and that doesn’t happen in education,” says Rudmik.
Now with the .NET platform installed Master’s can
know immediately when a block occurs in a child’s learning, because the data of
a student’s learning curve is gathered in real time from each classroom and
stored in the school's central database. Soon, this information will lose no
time being transmitted to parents, instead of taking several months before the
next parent night.
Rudmik calculates that 98 per cent of the
school’s parents have home Internet access. In the near future parents and
students will be able to access from home a continually updating report card,
so that all parties can know how a child is progressing at any given time. The
advantage is that no time is lost to get on top of a problem before it becomes
so overwhelming that the class leaves a student behind.
Once the system is connected to the web, Master’s
hopes to employ the Web Services aspect of its data collection to report to
Alberta Learning, the province’s board of education in Edmonton, continuously
and in real time. At the moment the school reports electronically and
infrequently using a cumbersome system. Reporting will become an easy,
automatic and continually updated method, all of which grows naturally out of
the robust, flexible and scalable platform provided by Microsoft’s .NET
platform.
Resulting Value
Installing the .NET platform has allowed Master’s
Academy & College to move from a difficult system that was troublesome for
teachers to use to having a ticket to the coming Web
Services revolution. Says Rudmik, “It was a cumbersome process with the amount of
information, data, the process, and the problems. It was a complex system we
had built. It was beyond typically what spreadsheets are used for. We brought
in the .NET platform, and it solved that one side, but it also gave us the capacity
to build toward our vision of real time reporting.”
Solving that problem also opens new
possibilities: soon it will be able to immediately communicate its findings to
all parties; and it positions this forward-looking school to being fully ready
for connecting to the world beyond Calgary, Alberta.
CLIENT/SERVER MODELS:
Client/server systems can be
classified based on the way in which the systems have been built. The most
widely accepted range of classifications has come from the Gartner Group, a
market research firm in Stamford, Connecticut (see Figure 1.1). Although your
system will differ slightly in terms of design, these models give you a good
idea of how client/server systems can be built.
These models are not, however,
mutually exclusive, and most good systems will use several of these styles to
be effective and efficient. Over time, client/server systems may move models as
the applications are replaced or enhanced. These models demonstrate that a full
definition of a client/server system is a system in which a client issues
requests and receives work done by one or more servers. The more servers
statement is important because the client may need to access several distinctly
separate network systems or hosts. The following sections describe each of the
five basic models.
In its simplest form, client/server
identifies a system whereby a client issues a request to a second machine
called the server asking that a piece of work is done. The client is typically
a personal computer attached to a LAN, and the server is usually a host machine
such as a PC file server, UNIX file server, or midrange/mainframe.
Figure 1.1: The Gartner Group model.
The job requests can include a variety of tasks, including,
for example:
■ Return all records from the customer file
database where name of Customer = Holly
■ Store
this file in a specific file server data directory
■ Attach
to CompuServe and retrieve these items
■ Upload
this data packet to the corporate mainframe
To enhance this definition you should
also consider the additional requirements that a business normally has.
Model 1: Distributed Presentation
Distributed presentation
means that both the client and the server machines format the display
presented to the end user. The client machine intercepts display output from
the server intended for a display device and reroutes the output through its
own processes before presenting it to the user.
As the figure below shows, the easiest
model is to provide terminal emulation on the client alongside other
applications. This approach is very easy to implement using products such as
WallData's Rumba or Attachmate but provides no real business benefit other than
to begin a migration to client/server. Sometimes a company may use a more
advanced form of terminal emulation whereby they hide the emulation screen and
copy some of its contents, normally key fields, onto a Visual Basic or Borland
Delphi screen. This copying is often referred to as screen scraping. Screen
scraping enables a company to hide its mainframe and midrange screens and
present them under a PC interface such as Windows or OS/2. The major benefit of
screen scraping is that it allows a system to migrate from an old
mainframe-based system to a new client/server system in small incremental
steps.
Figure: Distributed presentation: terminal emulation and screen scraping.
(Client: presentation, via screen scraping or terminal emulation; server: program logic and data.)
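A hedged Visual Basic sketch of screen scraping follows; Emulator1 stands in for a terminal-emulation control, and its Connect and GetText methods are hypothetical, since each emulation product exposes its own comparable interface:

Private Sub Form_Load()
    ' Drive the hidden emulation session and copy key host-screen fields
    ' onto ordinary VB controls
    Emulator1.Connect "MAINFRAME01"                   ' hypothetical method
    txtAccount.Text = Emulator1.GetText(5, 20, 8)     ' row 5, column 20, 8 characters
    txtName.Text = Emulator1.GetText(6, 20, 30)
End Sub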
Model 2: Remote Presentation
It may be
necessary to move some of the application's program logic to the PC from the
host computer. The second model, as shown in the figure below, allows for some
business/program logic as well as the presentation to reside on the PC. This
model is particularly useful when moving from a dumb terminal environment to a
PC-LAN environment. The logic can be of any type; however, validation of
fields, such as ensuring that states and zip codes are valid, is an ideal type
of logic.
Model 3: Distributed Logic
A distributed logic client/server application splits the logic of the
application between the client and server processes. Typically, an
event-driven GUI application on the client controls the application flow, and
logic on the server centrally executes the business and database rules. The
client and server processes can communicate using a variety of middleware
tools, including APPC, Remote Procedure Calls (RPC), or data queues.
Differentiating between the remote presentation and distributed logic
models isn't always easy. For example, if a remote presentation application
performs some calculations with the data it receives, does it therefore become
a distributed logic application? This overlap between the models can sometimes
make the models confusing. The following figure shows the distributed logic client/server
model.
Figure: The distributed logic model. (Client: presentation and program logic; server: program logic and data.)
Model 4: Remote Data
With the remote
data model, the client handles all the application logic and end-user presentation,
and the server provides only the data. Clients typically use remote SQL or Open
Database Connectivity (ODBC) to access the data stored on the server.
Applications built in this way are the most common in use today. The
figure below shows this model.
In the remote data model, all the application logic resides
on the PC.
Figure: The remote data model. (Client: presentation and all program logic; server: data.)
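Since the course later covers Data Access Objects (DAO), here is a hedged DAO-over-ODBC sketch of the remote data model; the DSN, credentials, and table are hypothetical:

Dim db As DAO.Database
Dim rs As DAO.Recordset

' The client owns all application logic; the server supplies only rows
Set db = OpenDatabase("", False, False, "ODBC;DSN=SalesDB;UID=appuser;PWD=secret")
Set rs = db.OpenRecordset("SELECT OrderID, Amount FROM Orders", dbOpenSnapshot)
Do Until rs.EOF
    Debug.Print rs!OrderID, rs!Amount
    rs.MoveNext
Loop
rs.Close
db.Close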
Model 5: Distributed Data
Finally, the
distributed data model uses data distributed across multiple networked systems.
Data sources may be distributed between the client and the server or multiple
servers. The distributed data model requires an advanced data management scheme
to enforce data concurrency, security, and integrity across multiple platforms.
As you would expect, this model is the most difficult client/server model to
use. It is complex and requires a great deal of planning and decision-making
to use effectively. The following figure shows this model.
Figure: The distributed data model. (Client: presentation, all program logic, and some data; servers: the remaining data.)
MIDDLEWARE:
Middleware is used to glue together
applications or components. A few examples of middleware include:
– IPC by sockets, shared memory
– TCP/IP, X.25
– Common database
– RPC, CORBA, RMI
– MOM
Connectivity allows applications to
transparently communicate with other programs or processes, regardless of their
location. The key element of
connectivity is the network operating system (NOS). NOS provides services such
as routing, distribution, messaging, file and print, and network management
services. The NOS relies on communication
protocols to provide specific services.
The protocols
are divided into three groups:
1. Media protocols,
2. Transport protocols, and
3. Client-server protocols.
Media protocols determine the type of
physical connections used on a network. Some examples of media protocols are
Ethernet, Token Ring, Fiber Distributed Data Interface (FDDI), coaxial and
twisted-pair.
A transport protocol provides the
mechanism to move packets of data from client to server. Some examples of
transport protocols are Novell's IPX/SPX, Apple's AppleTalk, Transmission
Control Protocol/ Internet Protocol (TCP/IP), Open Systems Interconnection
(OSI) and Government Open Systems Interconnection Profile (GOSIP).
Once the physical connection has
been established and transport protocols chosen, a client-server protocol is
required before the user can access the network services. A client-server
protocol dictates the manner in which clients request information and services
from a server and also how the server replies to that request. Some examples of
client-server protocols are NetBIOS, RPC, Advanced Program-to-Program
Communication (APPC), Named Pipes, Sockets, Transport Level Interface (TLI) and
Sequenced Packet Exchange (SPX).
Types of Middleware:
1. Remote Procedure Calls (RPC): the client makes calls to procedures running on remote computers; calls may be synchronous or asynchronous.
2. Message-Oriented Middleware (MOM): asynchronous calls between client and server via message queues (see the sketch after this list).
3. Publish/Subscribe: push technology; the server sends information to the client when it becomes available.
4. Object Request Broker (ORB): object-oriented management of communications between clients and servers.
5. SQL-oriented Data Access: middleware between applications and database servers.
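As a hedged illustration of the MOM style, Microsoft Message Queuing (MSMQ) can be driven from Visual Basic through its COM objects (a reference to the Microsoft Message Queue Object Library); the queue path and message contents are hypothetical:

Dim qi As New MSMQ.MSMQQueueInfo
Dim q As MSMQ.MSMQQueue
Dim msg As New MSMQ.MSMQMessage

qi.PathName = ".\Private$\Orders"          ' hypothetical local private queue
Set q = qi.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)

' The sender does not block waiting for a receiver;
' the queue decouples the two processes in time
msg.Label = "NewOrder"
msg.Body = "CustID=42;Qty=10"
msg.Send q
q.Close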
Database Middleware:
1. ODBC (Open Database Connectivity): most DB vendors support this.
2. OLE DB: Microsoft enhancement of ODBC.
3. JDBC (Java Database Connectivity): special Java classes that allow Java applications/applets to connect to databases.
Middleware Vendors:
1. Noble Net:
Noble Net specializes in providing
high quality middleware tools for client-server development. Its premier
product is EZ-RPC, an RPC precompiler tool kit that includes an enhanced XDR
(packaged as an IDL), precompiler, and various libraries. EZ-RPC is available
on more than 40 platforms, including most UNIXes, most Windows, Macs,
VMS, OS/2, and several others.
Noble
Net also publishes a Windows rpcgen and distributes the IONA Corporation’s
Orbix Object Request Broker development toolkit. A new product, a distributed two-tier ODBC
driver SDK, is available for those working with databases. Noble Net provides
free evaluation copies of EZ-RPC to qualified programmers.
2. Piccolo:
Piccolo, from Cornerstone Software,
Inc. is a message-oriented middleware product that provides application
developers with peer-to-peer connectivity without regard for the underlying
communications transport (i.e. TCP/IP, NetBIOS, Async).
Piccolo is supported on UNIX
versions AIX, SCO, HP-UX (HP9000/700 & 800), Tandem S2 Integrity, Solaris
2.1, and Silicon Graphics (SGI). It is
also supported on Windows 3.x, Windows NT, and the Tandem Non-Stop Kernel.
Application developers use the Piccolo API to access data and applications
residing on any of the supported platforms on a network. The developers need no programming knowledge
of the underlying transport protocol.
3. PIPES Platform:
PIPES Platform, from Peer Logic, is
message-oriented middleware that provides the essential communications services
for distributing applications across the enterprise. PIPES Platform's process-to-process messaging
allows development of applications with an asynchronous, non-blocking,
event-driven architecture. A dynamic
name service lets applications locate at run time and communicate with any application resource
in the PIPES Platform network. PIPES Platform automatically maintains
information on all PIPES Platform resources, even as machines and applications
are added or moved. Session management services provide guaranteed message
delivery, integrity, prioritization, sequencing, dynamic re-routing and error
handling. PIPES Platform's cross-platform and multiprotocol support provide a
consistent communications interface that allows developers to focus on business
logic, not communications.
4. SmartSockets:
SmartSockets, from Talarian
Corporation, is a rapid application development toolkit which enables processes
to communicate quickly, reliably, and securely across different operating
system platforms, through the use of messages. The communicating processes can
reside on the same machine, on a LAN, on a WAN, or anywhere on the Internet.
SmartSockets is an
industrial-strength package which takes care of network interfaces, guarantees
delivery of messages, handles communication protocols, and deals with recovery
after system/network failures. SmartSockets's programming model is built
specifically to offer high-speed interprocess communication, scalability,
reliability and fault tolerance.
It supports a variety of
communication paradigms including publish-subscribe, peer-to-peer, and RPC.
Included as part of the package are graphical tools for monitoring and
debugging applications. SmartSockets is available on most UNIX platforms, OpenVMS,
Windows 3.1, Windows 95, Windows NT, and OS/2.
DATABASE CONNECTIVITY AND ITS NEEDS:
Open Database Connectivity (ODBC)
– ODBC specifies a standard CLI
– ODBC is a superset of the ANSI/ISO CLI
– ODBC uses standard SQL (SQL-92)
– ODBC defines minimum SQL for non-RDBMS data
– ODBC drivers expose existing functionality
– ODBC is available on Windows, Macintosh, OS/2, UNIX, etc.
– ODBC is used by most commercial applications
– ODBC has over 370 drivers from over 100 companies
– ODBC speed is comparable to that of a native CLI
ODBC is:
– A database API specification
ODBC is not:
– A heterogeneous query engine
– A database management system
– A way to add database features
ODBC Architecture:
The various
components of the ODBC architecture are described as follows:
Application layer:
– Only one application resides in the application layer at a time.
– The application calls ODBC functions.
– The application layer is linked to the Driver Manager.
– The application layer is written by many companies.
Driver Manager:
– One Driver Manager exists.
– The Driver Manager loads and unloads drivers.
– The Driver Manager implements ODBC functions.
– The Driver Manager passes most ODBC function calls to drivers.
– The Driver Manager handles backward compatibility.
– The Driver Manager is written by Microsoft or Visigenic.
Driver:
– There may be one or more drivers per application.
– The driver implements ODBC functions.
– The driver is a thin layer over an RDBMS.
– The driver is a thick layer over a non-RDBMS (it includes a SQL engine).
– The driver is written by a small number of companies.
Data Source:
– There may be one or more data sources per driver.
– The data source contains the actual data.
Typical examples of data sources include an RDBMS, a dBase
file, a spreadsheet, etc.
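To make the layering concrete, a DSN-less ADO connection string can name the ODBC driver directly, so the Driver Manager loads it without a pre-configured data source; the server, database, and credentials below are hypothetical:

Dim cn As New ADODB.Connection
' Driver={SQL Server} selects the ODBC driver;
' Server and Database identify the data source
cn.Open "Driver={SQL Server};Server=ACCTSRV;Database=Sales;UID=appuser;PWD=secret"
Debug.Print cn.State   ' 1 (adStateOpen) once the driver has connected
cn.Close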
RIGHTSIZING
As client/server
technology evolves, the battle cry is now rightsizing: design new applications
for the platform they are best suited for, as opposed to using a default
placement.
An application should run in the
environment that is most efficient for that application. The client/server
model allows applications to be split into tasks and those tasks performed on
individual platforms. Developers review all the tasks within an application and
determine whether each task is best suited for processing on the server or on
the client.
In some cases,
tasks that involve a great deal of number-crunching are performed on the server
and only the results transmitted to the client. In other cases, the workload of
the server or the trade-offs between server MIPS (millions of instructions per
second) and client MIPS, together with the communication time and network
costs, may not warrant the use of the server for data intensive, number-crunching
tasks.
Determining how
the tasks are split can be the major factor in the success or failure of a
client/server application. And if the first client/server application is a
failure, for whatever reason, it may be a long time before there is a second.
Some variations on this theme are:
Downsizing. A host-based application is downsized when it is
re-engineered to run in a smaller or LAN-based environment.
Upsizing. Applications that have outgrown their environment are
re-engineered to run in a larger environment.
Smartsizing. In contrast to rightsizing, which is technology based,
smartsizing affects the entire organizational structure and involves
re-engineering and redesigning the business process, as well as the information
systems that support the process.
DOWNSIZING:
Downsizing
involves porting applications from mainframe and mid-range computers to a
smaller platform or a LAN-based client/server architecture.
One potential
benefit of downsizing is lowered costs. Computer power is usually measured in
MIPS. Currently, the cost of mainframe MIPS varies from $75,000 to $150,000;
midrange MIPS about $50,000 and desktop micro MIPS about $300. A micro that can
perform as a LAN server ranges from $1,000 to $3,000 per MIPS. As technology
improves, the costs of LAN servers and micros continue to drop. The midrange
and mainframe (host) technologies are improving at a slower rate. Their costs
are dropping at an even slower rate.
However, the
cost benefit is not as straightforward as it appears. Host MIPS are used more
efficiently and the processor has a higher utilization rate. Hosts
automatically provide services (such as backup, recovery, and security) that
must be added to LAN servers. Host software costs more than micro software, but
more copies of micro software are required. Mainframes require special rooms,
operators, and systems programmers. Micros sit on a desk. LAN servers use
existing office space and require no specialized environment.
Another way to
look at the cost benefit is to recognize where most of an organization's MIPS
are today—on the desktop! And most of those MIPS aren't fully utilized. Figure
1.4 illustrates the relationship between the number of LAN-connected micros and
the number of business micros. Gartner Group (Stamford, Connecticut) predicts
that by 1996 there will be nearly five million LANs and 75 percent of all
business micros will be connected to a LAN.
By using the
existing desktop MIPS, organizations can postpone or eliminate hardware
acquisitions. Many of these desktop machines are already linked to a central
machine using terminal emulation software, so the network is already in place.
Other potential
benefits of downsizing are improved response time, decreased systems
development time, increased flexibility, greater control, and implementation of
strategic changes in workflow processes.
In addition, mainframe applications downsized to a desktop/LAN environment allow data to be accessed by other applications. However, the decision to downsize should be made on an application-by-application basis. Downsizing the wrong application could put an organization at risk.
According to Theodore P. Klein, president of the Boston-based consulting firm Boston Systems Group, Inc., an organization must answer the following questions when evaluating applications for downsizing:
· Is the application departmental, divisional, or enterprise-wide?
· What is the database size and how must it be accessed?
· Is the application functionally autonomous?
· How familiar with the new technology are the users and IS staff?
· Is the data in the application highly confidential?
· What level of system downtime can be tolerated?
Downsizing is
not as easy as buying and installing hardware and software that support
client/server computing. The larger environments that these applications run on
have built-in features, such as capacity planning and performance monitoring,
that are still in their infancy in client/server platforms. As a result,
client/server environments must be fine-tuned to reduce bottlenecks and make
optimal use of processing cycles. While hardware and software cost savings may
be almost immediate and dramatic, processing savings will be slower to realize
and less impressive.
When evaluating
applications for downsizing, an organization must also recognize the political
issues involved. In many organizations, ownership of information systems
represents power. Downsizing applications changes the organizational structure.
It is important that the political issues be planned for and dealt with.
Ø UPSIZING:
Even as companies are downsizing
from their glass-housed mainframes to distributed LAN-based systems, they are
planning for the future by ensuring that these new systems are expandable. When
an application outgrows the current environment, the capacity of the
environment should be increased or the application should be ported to a larger
environment with no disruption to users.
Environments can be expanded in many ways, which include:
· Increasing memory and storage on the server
· Swapping a more powerful processor into the server
· Adding processors to the server
· Upgrading to more robust network software
For expansion to occur with a minimum of disruption to the users, open systems (hardware and software) should be used whenever possible.
Ø SMARTSIZING:
Smartsizing is based on
re-engineering the business processes themselves, in contrast to downsizing,
which re-implements existing automated systems on smaller or LAN-based
platforms. Downsizing focuses on cost savings and increasing current productivity.
While the code for the application may be streamlined, little or no thought is
given to the process itself.
Smartsizing implies that information technology can make the business process more efficient and increase profits. Business re-engineering focuses on using technology to streamline internal workflow tasks, such as order entry and customer billing. Information technology can also be used to increase customer satisfaction and to develop and bring products to market faster.
Ø Characteristics of client/server
architecture:
The basic
characteristics of client/server architectures are:
§ Asymmetrical protocols: There is a many-to-one relationship between clients and a server. Clients always initiate a dialog by requesting a service; servers wait passively for requests from clients.
§ Encapsulation of services: The server is a specialist: when given a message requesting a service, it determines how to get the job done. Servers can be upgraded without affecting clients as long as the published message interface used by both is unchanged.
§ Integrity: The code and data for a server are centrally maintained, which results in cheaper maintenance and the protection of shared data integrity. At the same time, clients remain personal and independent.
§ Location transparency: The server is a process that can reside on the same machine as a client or on a different machine across a network. Client/server software usually hides the location of a server from clients by redirecting service requests. A program can be a client, a server, or both.
§ Message-based exchanges: Clients and servers are loosely coupled processes that exchange service requests and replies using messages (see the sketch after this list).
§ Modular, extensible design: The modular design of a client/server application enables that application to be fault-tolerant. In a fault-tolerant system, failures may occur without causing a shutdown of the entire application: one or more servers may fail without stopping the whole system, as long as the services offered on the failed servers are available on servers that are still active. Another advantage of modularity is that a client/server application can respond automatically to increasing or decreasing system loads by adding or shutting down one or more services or servers.
§ Platform independence: The ideal client/server software is independent of hardware or operating system platforms, allowing you to mix client and server platforms. Clients and servers can be deployed on different hardware using different operating systems, optimizing the type of work each performs.
§ Reusable code
§ Scalability: Client/server systems can be scaled horizontally or vertically. Horizontal scaling means adding or removing client workstations with only a slight performance impact. Vertical scaling means migrating to a larger and faster server machine or adding server machines.
§ Separation of client/server functionality: Client/server is a relationship between processes running on the same or separate machines. A server process is a provider of services; a client is a consumer of services. Client/server provides a clean separation of functions.
§ Shared resources: One server can provide services for many clients at the same time and regulate their access to shared resources.
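As promised above, here is a hedged VB6 sketch of a message-based exchange using the Microsoft Winsock Control. The host name appserver, port 5001, and the GET_BALANCE message format are invented for illustration; a form holding a Winsock control named Winsock1 and a command button named Command1 is assumed.

Private Sub Command1_Click()
    Winsock1.Connect "appserver", 5001    ' the client initiates the dialog
End Sub

Private Sub Winsock1_Connect()
    Winsock1.SendData "GET_BALANCE 1042"  ' request message sent to the server
End Sub

Private Sub Winsock1_DataArrival(ByVal bytesTotal As Long)
    Dim reply As String
    Winsock1.GetData reply                ' reply message from the server
    MsgBox "Server replied: " & reply
End Sub

Note that the client knows only the message format, not how the server computes the answer, which is the encapsulation of services described above.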
UNIT II:
Course Objectives:
Þ Understand the benefits of using Microsoft Visual Basic 6.0 for Windows as an application tool
Þ Understand the Visual Basic event-driven programming concepts, terminology, and available tools
Þ Learn the fundamentals of designing, implementing, and distributing a Visual Basic application
Þ Learn to use the Visual Basic toolbox
Þ Learn to modify object properties
Þ Learn object methods
Þ Use the menu design window
Þ Understand proper debugging and error-handling procedures
Þ Gain a basic understanding of database access and management using data-bound controls
Þ Obtain an introduction to ActiveX controls and the Windows Application Programming Interface (API)
What is Visual Basic?
· Visual Basic is a tool that allows you to develop Windows (Graphical User Interface, or GUI) applications. The applications have a familiar appearance to the user.
· Visual Basic is event-driven, meaning code remains idle until called upon to respond to some event (button pressing, menu selection, and so on). Visual Basic is governed by an event processor: nothing happens until an event is detected. Once an event is detected, the code corresponding to that event (the event procedure) is executed, and program control then returns to the event processor. A minimal sketch of an event procedure follows.
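The sketch below assumes a form holding a command button named Command1 and a text box named Text1 (the default names VB assigns); the code runs only when the user clicks the button.

Private Sub Command1_Click()
    ' This event procedure sits idle until the Click event fires;
    ' control returns to the event processor as soon as it ends.
    Text1.Text = "Button clicked at " & Format(Now, "hh:nn:ss")
End Sub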