Software Development
In the early days of computers, the software development process evolved fairly quickly, so that by the late 1960s third-generation (3GL, or high-level) programming languages such as Fortran and COBOL were in widespread use. Many other languages have been designed since, but none in widespread use has really broken away from the programming model of that era. Of course, there have been advances; for example, object-oriented languages such as C++, Java and Objective-C can facilitate the re-use of existing software, but they don't fundamentally change the traditional approach. Indeed, the continued and apparently endless invention of newer languages (C#, Perl, Ruby, Python, PHP, D, R, Swift and so on) illustrates the dissatisfaction with existing languages and the constant striving for something better.
It is amazing to see that more than 50 years after the development of high-level languages new ones are still being devised at an astonishing rate. The sheer number of different languages can be seen on this site that ranks languages by popularity: Programming Language Rankings (Wired.com). The surprising thing is that few people appear to question whether computer programming languages could be part of the problem rather than the solution.
The 'traditional approach' referred to here involves writing software in a programming language, held in one or more source-code files. In the early days the source-code files were held on paper tape, punched cards or magnetic tape; today they are typically held on disk or solid-state drives, but they are essentially unchanged in that they are text files read a line at a time by a compiler, which produces an object file from the information in the source code. A number of object files are combined by a linker to produce an executable file (or program), and one or more executable files constitute an application (or app).
The essential point to appreciate is that by far the most important factor shaping the traditional approach to software development was the extreme memory constraint imposed by the computers of that time. A typical device today has tens of thousands of times more memory than was possible in systems of the 1950s and 1960s. A top priority in the design of any general-purpose compiler/linker system then was to minimise the size of the executable files, and of the compiler and linker themselves; otherwise they simply would not fit into memory.
From a practical viewpoint, a compiler can be considered as a filter that throws away as much information from the source files as possible to achieve as small an executable file as is practical. This process results in something that is inherently unchangeable and which cannot tell other programs or other users what it does or what information it processes.
It probably seems amazing to most computer users, but the technology used to write the vast majority of software in use today exists in the form it does for reasons that ceased to be relevant decades ago, and it rests on a fundamental approach that has hardly changed in all that time.
The approach followed for the development of Adapt has been to design a new software architecture that fully exploits the capabilities of today's systems and the high degree of connectivity provided by the Internet. The resulting architecture enables all the information input by the application designer to be held in a form that may be viewed, analysed and modified at any time resulting in immediate changes to the appearance and behaviour of the application.
It is an important feature of Adapt that no programming language and no source files are involved in the definition of an application. The designer may view all the features and characteristics of an application at any time and analyse the ways that different features interact. It is always possible to examine the operations and calculations that may be performed at different times and the situations in which they may occur. When making changes the designer is shown the possible options at that point and is able to see the effects of any changes. The design information is held in a structured form and is never converted 'behind the scenes' to program source files; this means that it is always available to be analysed and to generate visual descriptions of the features of the application.
Such visual descriptions are used to explain what information the application holds and the operations that may be performed upon it in various circumstances. The ability of an application to explain what it does makes the task of the designer very much easier and also helps a user to understand what the application does and how to use it. As changes are made, the visual descriptions of the application automatically change, so they are always up to date.
The wealth of design information available means that it will be possible to use VR headsets to 'walk around' inside a 3D representation of the application hierarchy. This will show interactions/dependencies between the apps and between the dAtas within each app. The designer will be able to examine/modify these in detail and view/analyse the attached Logics.