
Wednesday, December 21, 2016

Testing Document for Software Engineering

Testing Overview
Testing was performed in the style of alpha white-box testing.  It was conducted from the perspective of the coder, with prior knowledge of the system.  Unit testing was done on individual web pages.  Integration testing covered the cross-referencing of web pages and the foreign-key attributes of records.  After each of those was conducted, a complete overview of the site as a whole was the last step of system testing.  Testing was based on the requirements the group presented in the design and analysis phases.  Scaffolds were created for the identified class objects, except for personal information.  A scaffold in Ruby provides a style sheet, a migration file for the database, a controller for page interaction, a model page for variable manipulation, and four HTML pages.  The pages consist of new, show, edit, and index, with a default form for the attributes declared at instantiation.  Test modules and CoffeeScript pages were also created but were not implemented.  The model of the site was intended to fit the processes of the use cases and diagram charts from the previous documents.

Unit Testing
Unit testing was done for each page.  Whenever a page has an error, the user is sent to an error page with details of what happened, which is also excellent for hinting at how to fix it.  Every single web page was tested to see whether it was accessible via the browser.  Once retrieved, each of its links and attributes was tested.  The new page of each scaffold was to insert the given information into the database as a record visible from the other three pages.  Show displays the data; to elaborate, media files in the form of video or images must be present, and a social media link should go directly to the external page the user entered.  Edit allows you to change any attribute available on the form.  Index serves as a home page for all records of a given class found in the database.  This sequence was repeated for each scaffold created.  Heading styles and the application's navigation links should also behave reliably, as designed, from page to page.  A sample image from the file upload page is attached below.




Integration Testing
The database was scripted to have the login object's id be a foreign key of the social media, visual media, and student activity objects.  This was to ensure that all information associated with a specific login id would appear on its page.  Once a login was created, the tester would go to the show page.  From show, I could then go to one of the other scaffolds to create a new object.  The login id would be a hidden field passed as an inherited session variable.  Returning to the login's show page should then display the newly created video file, image file, Instagram link, Facebook link, Twitter link and activity update.  Additional route coding was added to the routes page to create the appropriate links for the necessary HTML actions.  Images are added below from the first test, with Internet Explorer, where the visual media will play.  The second image is from Mozilla Firefox, where activities were added but the video is not supported.


System Testing
A review of overall system performance was the final stage of system testing.  Unit testing and integration testing made this simple.  Aside from the review, the database was checked to see whether information was entered and updated.  This was completed by inserting data via the website and using the sqlite terminal to check the tables.

Testing Results
In the early stages there were several errors, as misspellings would lead to object-not-found errors.  Working through those sorts of bugs made room for the more technical aspects of the intended design.  There was a lot of trial and error involved, as changes were made and the application was then run to see the effects.  One regret was not getting the user authentication page working.  Authentication was supposed to check the database for the entered login name.  If it was found, the next step was to take the stored encrypted password, decrypt it, and match it to the one the user entered.  One of the problems encountered was that creating such a record would be inefficient duplication in the database.  If the information was not entered into the database, it would not accept a direct match because the primary key was not found using the implementation of a session variable.  If I used a non-session variable, the user would just be sent to an empty login screen.  Other than that, the final product performs very close to what was expected.  There were some flaws, such as videos only being viewable in mp4 format in the Internet Explorer browser, but I was not able to fix that from a design standpoint; it may have been a conflict with the development environment or the source code.  There was also not enough time for a mobile conversion.  Overall, the final result is that the application meets many of the requirements requested.
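For illustration only, the following Scala sketch captures the authentication flow described above: look up the entered login name, decrypt the stored password, and compare it to the user's input.  The record type, field names, and toy decryption function are hypothetical and are not taken from the actual Rails project.

```scala
// Hypothetical sketch of the intended authentication check.
case class Login(id: Int, name: String, encryptedPassword: String)

object AuthSketch {
  // Stand-in for the database lookup; the real application would query the logins table.
  def findByName(db: Seq[Login], name: String): Option[Login] =
    db.find(_.name == name)

  // Stand-in for whatever decryption scheme the project intended to use.
  def decrypt(secret: String): String = secret.reverse

  // Returns the login id to keep in the session when the credentials match.
  def authenticate(db: Seq[Login], name: String, entered: String): Option[Int] =
    findByName(db, name)
      .filter(login => decrypt(login.encryptedPassword) == entered)
      .map(_.id)

  def main(args: Array[String]): Unit = {
    val db = Seq(Login(1, "tester", "drowssap"))
    println(authenticate(db, "tester", "password")) // Some(1)
    println(authenticate(db, "tester", "wrong"))    // None
  }
}
```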


High End Computers and Computational Science

The term “high end computer” has varying definitions.  Most of the machines that fall under that label sit between the early traditional or simple PC and the very large supercomputers.  In fact, as personal computers became more popular and readily available, prices also went down.  Generic or cheaper alternatives were then designed just as models with increased performance and capacity began to hit the market.  The blending of the affordable PC with the superior supercomputer provides a machine whose possibilities are nearly endless and which can fit in one room of your home.  If a uniform definition of a high end computer can be drafted, it will involve the computer being expensive and having elite processing capability.  Cost and performance go hand in hand, as the processors, memory, graphics cards and hard drives play directly into both.  To further elaborate, we are talking about processors with multiple cores and clock speeds measured in gigahertz, synchronous dynamic memory with many gigabytes dedicated to active services and applications, very high capacity graphics cards, and solid state drives that can hold terabytes of data [1].  The trend in technology is that the new always seems to fade out the old.  Usually the differences are so subtle that only experts in the field can explain what truly separates them.  The latest devices come at a higher cost than the previous iterations, and the enhanced performance is just part of the package.  These computers are commonly associated with research experiments, gaming and software development.
            One more industry that uses high end computers is Computational Science.  Computational Science is a budding field that draws a clear line so as not to be confused with Computer Science.  From an article posted in the SpringerPlus journal, the author defines it as “being the application and use of computational knowledge and skills (including programming) in order to aid in the understanding of and make new discoveries in traditional sciences, as opposed to the study of computers and computing per se” [2].  That classification describes how computers can be used in a variety of ways to assist with STEMM projects, where STEMM stands for science, technology, engineering, mathematics and medicine.  Aside from the somewhat circular reference to technology, computers are primarily used to process large quantities of information, break down and solve complex equations, and store relevant records.  Where computer science focuses on the components of hardware and software, computational science focuses on their uses in other fields.  The interweaving of high end computing and computational science forms a very formidable tandem which has advanced computing in many areas.
            One of the primary reasons for this collaboration, as previously stated, is research.  When research needs to produce results, they can come in the form of data visualization.  After gathering so much information, the reporting has to be condensed to the facets of the study.  The numbers are broken down into categorical distinctions, and from the vast total the research can generate representative parts of the sum.  Data visualization is the process of converting the information into models that can be interpreted by individuals who are not certified experts in the field.  A component of Computational Science can be the talent of presenting graduate-level research so that it can be received from a novice's perspective.  Though experts will certainly be able to understand it as well, it is intended for a larger group, and data visualization has been known to help both very effectively.  The following excerpt from a high end computing publication gives some hints about the procedure: “The scientific method cycle of today’s terascale simulation applications consists of building experimental apparatus (terascale simulation model), collecting data by running experiments (terascale output of the simulation model), looking at the data (traditional visualization), and finally performing data analyses and analysis-driven visualization to discover, build, or test a new view of scientific reality (scientific discovery)” [3].  The computers assist at each integral step.  Designing the model and how it will operate is just as important as formally collecting the data.  Computable models are the models of prime interest in computational science.  A computable model is a computable function, as defined in computability theory, whose result can be compared to data from observations [4].  Model design can be about deciding the actual form of the input and output.  Input can be as simple as inserting numbers into preset forms or as multifaceted as providing files and having the data parsed.  Telling the computer what to do with all that evidence is the computational side of the output.  For data visualization, the output will be a chart, graph or map.  Researchers can present the details of why the model's data is isolated to create the image it does.  Some of the variations include bar charts, histograms, scatter plots, network models, streamgraphs, tree maps, Gantt charts and heat maps [5].  Would there be a loss in quality or relatability if a histogram were chosen over a Gantt chart, for example?  Discussing what validates the choice of evocative data sets can be a very interesting niche within this study.  All of these, however, use the capabilities of high end computers to create high resolution images with rich colors and niceties as the result for computational science.  These images easily translate pages and pages of raw data into a much more digestible format.  Research that may have taken years and thousands of contributors can be reduced to a two dimensional representation viewable on one page.
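The computable-model definition above can be made concrete with a small sketch.  The following Scala example treats a model as an ordinary function whose predictions are compared against observed data; the linear model, data points and error metric are invented purely for illustration.

```scala
// A computable model as a plain function: given an input, produce a
// prediction that can be compared with data from observations.
object ModelSketch {
  // Hypothetical model: the predicted value grows linearly with the input.
  def model(x: Double): Double = 2.0 * x + 1.0

  // Observed (input, measurement) pairs; values invented for the example.
  val observations = Seq((0.0, 1.1), (1.0, 2.8), (2.0, 5.2), (3.0, 6.9))

  // Compare the model against the observations with mean squared error.
  def meanSquaredError: Double = {
    val errors = observations.map { case (x, y) => math.pow(model(x) - y, 2) }
    errors.sum / errors.size
  }

  def main(args: Array[String]): Unit =
    println(f"Mean squared error of the model: $meanSquaredError%.4f")
}
```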
            Three more examples of applications of this combination are parallel computing, grid computing and distributed computing.  From a computational science perspective, grid computing generates individual reports from separate locations as the information is fed into a primary resource.  Distributed computing instead produces a comprehensive report using multiple network nodes to collect data.  Lastly, parallel computing uses one main source to assist all the other nodes of a network.  Grid computing and distributed computing function in a very similar manner, as the network architecture is almost identical, but the end result as well as the root cause can differ.  A perfect illustration of this concept is the BOINC program at the University of California, Berkeley.  BOINC is an acronym for Berkeley Open Infrastructure for Network Computing [6].  Its initial release was in 2002.  Since then it has cycled through many participants who readily volunteer their computers and services for the research cause of their choice.  And there are many to choose from: a recent check of the website displayed over 264,000 active volunteers using over 947,000 machines spread across almost 40 projects.  Projects vary in topic from interstellar research and identifying alien lifeforms to what could be the next steps in advanced medicine.  Each one has volunteers dedicating time and resources to what interests them, and each person can complete an agreement form and download software to become part of it.  The projects use several high end, and maybe not-so-high-end, computers across the network to receive data at a remarkably fast rate.  Systems of this type are measured in floating point operations per second, or FLOPS.  That unit of measure falls under the category of a performance metric [7].  High end computing can be measured by the application, the machine, or their combined integration when configuring performance metrics.  The instruction sets of the application, processed by the available sockets and cores of the system during a given clock cycle, lead to a number for how many floating-point operations are conducted, as sketched below.  To be precise, the FLOPS totals calculated by the BOINC structure are reported in petaFLOPS.  The peta- prefix denotes 10^15.  That quantity is possible because of the immense shared capacity and very little idle time or mechanical malfunction.  That is an enormous number; to put it another way, imagine the speed you would have to move at to do a task a million times in one billionth of a second.  When the data collection is time sensitive, errors can arise from user authentication and system authorization.  People gaining access to the network, and the network having access to the computer, can be viewed as sources of human error if not handled correctly.  Another area to ensure is security, to prevent interception or modification of information as it is transmitted.  Data needs to be protected as it is passed from node to node.  In many of the projects the retrieved data is geo-specific, so any misrepresentation or alteration of the records can seriously corrupt the reporting of the final statistics.  Encryption and decryption can play a major role in the security of the project.  There are countless methods to do this, but most will ensure a way for the message to be encoded when leaving the home location and decoded only once it reaches its intended destination.  Accuracy is critical to enterprise level operations in this field.
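As a rough illustration of that peak-throughput arithmetic, the Scala sketch below multiplies sockets, cores per socket, clock rate and floating-point operations per cycle, then scales up to a volunteer network and converts the total to petaFLOPS.  All of the hardware numbers are invented; they are not BOINC's actual figures.

```scala
// Rough theoretical peak FLOPS: sockets x cores per socket x clock rate
// x floating-point operations completed per core per cycle.
object FlopsSketch {
  def peakFlops(sockets: Int, coresPerSocket: Int,
                clockHz: Double, flopsPerCycle: Int): Double =
    sockets.toDouble * coresPerSocket * clockHz * flopsPerCycle

  def main(args: Array[String]): Unit = {
    // Hypothetical volunteer machine: 1 socket, 4 cores, 2.5 GHz, 8 FLOPs per cycle.
    val perMachine = peakFlops(1, 4, 2.5e9, 8)
    // Hypothetical pool of volunteer machines in a BOINC-style network.
    val machines = 900000
    val total = perMachine * machines
    println(f"Per machine: ${perMachine / 1e9}%.1f gigaFLOPS")
    println(f"Network:     ${total / 1e15}%.2f petaFLOPS") // peta- denotes 10^15
  }
}
```

With those made-up numbers, a single machine peaks at 80 gigaFLOPS and the whole pool at roughly 72 petaFLOPS, showing how the peta- scale emerges from many modest contributors rather than from any single high end computer.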
            Computational Science and High End Computing are the proverbial match made in heaven.  There is a give-and-give sort of relationship in which the prospects of each are enhanced by the other.  They are not completely inseparable, however.  Gaming is a major industry for high end computing; the frame rates of today’s video games are only possible with certain machines and graphics cards, and the minimum requirements for games and applications are sometimes discussed beforehand.  Likewise, computations can be done by human beings given a sufficient amount of time.  Computers have not been around nearly as long as science and math, and people in those fields have worked in them for centuries.  The majority of advancements that reach the mainstream begin from a human-proposed thesis and are assisted by, not dependent upon, technology.  But the merger is what allows effectiveness and time spent to be maximized.  The combination has expedited and enhanced research in several fields and has broadened the possibilities of what can be done.  Results can be recreated as graphical representations for discussion, and the data conversion provides a visual for better comprehending what are often large sets of raw numbers.  In conclusion, as a student of Computer Science I think one of the most astonishing feats may be the idea of the pair growing into its own genre rather than staying within the borders of the CompSci discipline.  I am equally impressed by the component materials needed to build and modify a high end computer as by its usage for mathematical and scientific applications.  Hopefully both will continue to flourish in the future through their ingenuity and popularity.  And the next evolution may be just around the corner.

References
[1] Origin PC Corporation. https://www.originpc.com/gaming/desktops/genesis/#spec2
[2] McDonagh, J., Barker, D. and Alderson, R. G. Bringing Computational Science to the Public. SpringerPlus. 2016.
[3] Ostrouchov, G. and Samatova, N. F. High End Computing for Full-Context Analysis and Visualization: when the experiment is done. 2013
[4] Hinsen, K. Computational science: shifting the focus from tools to models.  F1000Research.  2014.
[5] Data Visualization. https://en.wikipedia.org/wiki/Data_visualization.
[6] http://boinc.berkeley.edu/
[7] Vetter, J., Windus, T., and Gorda, B.  Performance Metrics for High End Computing.  2003.

Report on Scala

                Scala is a programming language created by Martin Odersky and others.  It is intended to be an elegant blend of object-oriented design and functional programming.  A guiding belief within its ranks is that every function is a value and every value is an object.  Development of the language began in 2001, followed by the initial release in 2003.  After an attempt to improve Java, the project spun off into its own language, formulated specifically for component-style software engineering.  The data types of the language are common.  Numbers can be in the form of doubles, floats, longs, ints, shorts and bytes.  Strings, sequences of alphanumeric or Unicode characters, come from the Java library.  There are also individual characters as chars and true/false values as Booleans.  There is some uniqueness, as units, iterables, maps, options, sets and lists are added to the mix of instantiable data types.  Even “empty” values are possible with the Nothing and Null types.
                Scala has five primary keywords for creation, as class, object, def, val and var are needed to identify user definitions.  A class is the design for what an object can be, while an object is just a single instance.  Another keyword, new, is used to convert a class into a created object.  The main method to run an application is customarily contained in a user-defined object.  The def keyword is used to create functions; it is followed by the name, the parameters in parentheses, a colon, the return type, an assignment operator and the set of instructions contained in braces.  Val signifies a placeholder whose assigned value will not change, while a var can be modified later in use.  The syntax calls for either the val or var keyword, followed by the name, then a colon, the data type, and finally whatever is being assigned to it.  Scala also has user-defined types: the keyword type is placed before the name, and a predefined type is then assigned with the assignment operator.  As far as file organization goes, Scala is very similar to Java in its naming conventions.  Also like Java, Scala uses the keywords package and import to define project scope and add external files, respectively.  A short sketch of this syntax follows below.
                Scala features two main forms of abstraction.  One is the traditional abstract class, where you provide a simple framework to be filled in later.  Very analogous to abstract classes is the concept of traits; traits also declare variables and methods that are inheritable by another class.  The inheritance occurs with the keyword extends.  The capability for concurrent processing is heavily associated with the java.util.concurrent package.  There are two points to this topic.  The first is the pair of library interfaces, Callable, which returns a value, and Runnable, which does not.  The second is the variety of threading possible with synchronous and asynchronous tasks.  Where Scala shows promise is in how vast and dynamic it can be, and the possibility of implementing nested functions is another positive addition.  For me, a drawback is how closely tied it is to Java and to running on a Java virtual machine.
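To ground the syntax described above, here is a minimal, self-contained Scala sketch; the names and values are invented for illustration and are not taken from any particular project.

```scala
// Illustrative example of the keywords discussed above:
// val, var, type, def, class, object, trait, extends and new.
object SyntaxSketch {
  // val: a placeholder whose value will not change; var can be reassigned.
  val greeting: String = "hello"
  var counter: Int = 0

  // A user-defined type: the type keyword, a name, and a predefined type.
  type Score = Double

  // def: name, parameters in parentheses, a colon, the return type,
  // an assignment operator, and the instructions in braces.
  def average(scores: List[Score]): Score = {
    scores.sum / scores.length
  }

  // A trait declares members that another class can inherit.
  trait Describable {
    def describe: String
  }

  // A class is the design; extends performs the inheritance.
  class Student(val name: String, val scores: List[Score]) extends Describable {
    def describe: String = s"$name averages ${average(scores)}"
  }

  // The main method is customarily contained in a user-defined object.
  def main(args: Array[String]): Unit = {
    // new converts the class into a created object (an instance).
    val student = new Student("Ada", List(90.0, 85.5, 97.0))
    counter += 1
    println(s"$greeting, ${student.describe} (run $counter)")
  }
}
```

Running the sketch prints one line combining the val, the instance created with new, and the reassigned var, covering the basic creation keywords in a single pass.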
                The difference between Scala and C is akin to comparing Java and C.  Start with the time period to get a better understanding: C was created in 1972 and Scala in 2001.  C has been the basis for many programming language concepts since its inception, while Scala is a fairly new language that is heavily dependent on Java.  Much of the grunt work that developers had to do in C has been made much easier by Scala's built-in libraries.  The major differences you could point to would be the same as when comparing other languages.  Every language has a particular syntax for carrying out common tasks.  The level of complexity depends on the features included in the language, but again these may be more time-dependent, as concepts were created by pioneers and then simplified for the users who came later.  Completing project one in Scala would be helped by the built-in array sorting feature, but much of the rest would be similar.
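As a quick illustration of that built-in sorting (the exact requirements of project one are not restated here), Scala collections expose a sorted method, so no hand-written sorting routine is needed:

```scala
// Built-in collection sorting, in contrast to the hand-written
// sorting loops that C typically requires.
object SortSketch {
  def main(args: Array[String]): Unit = {
    val values = Array(42, 7, 19, 3, 88) // invented sample data
    val ordered = values.sorted          // ascending order by default
    println(ordered.mkString(", "))      // prints: 3, 7, 19, 42, 88
  }
}
```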
References
[1] http://www.scala-lang.org/index.html
[2] Odersky, M. et al. An Overview of the Scala Programming Language.  EPFL Technical Report. 2nd Edition. 2004.
[3] Odersky, M. Scala By Example. 2014.
[4] Odersky, M. A Brief History of Scala. http://www.artima.com/weblogs/viewpost.jsp?thread=163733. 2006.
[5] Venners, B. and Sommers, F.  The Origins of Scala. http://www.artima.com/scalazine/articles/origins_of_scala.html. 2009.
[6] https://www.tutorialspoint.com/scala/index.htm
[7] Concurrency in Scala. https://twitter.github.io/scala_school/concurrency.html