Cracked Usenet.nl Account Generator



Usenet premium link generator is a cloud storage service that offers two kinds of accounts: a free one and a premium one. You need a premium account to get access to the features that are not available in the free version, such as an unlimited number of uploads and high download speed. I therefore suggest using a premium account; it also keeps your usage private and secure.




Abstract: We explain what the 500 language problem is, why it is a relevant problem, and why solutions are needed. We propose a solution: rapid development of renovation parsers by stealing grammars. We illustrate this by applying the approach to two non-trivial but representative languages: a proprietary real-time language from the telecommunications industry, and a well-known dialect of the most popular language in the world, IBM's VS Cobol II. We share the lessons we learned in our efforts to solve the 500 language problem.

Introduction

Capers Jones estimates that there are at least 500 languages and dialects available in commercial form or in the public domain. On top of that, he estimates that some 200 proprietary languages have been developed by corporations for their own use [1, p. 321]. In his book on estimating the costs of the Year 2000 problem [2] he furthermore indicated that systems written in all those 500 plus 200 languages were affected. The findings of Jones inspired many Y2K whistle-blowers to cite his estimates as a major impediment to solving the Year 2000 problem. Let us have a look at what these people had to say. For instance, Ed Yourdon replied with a boilerplate email when you sent him mail containing the words Y2K and solution. He mentions the 500 language problem, and it is worthwhile to quote this part in its entirety:

"I recognize that there is always a chance that someone will come up with a brilliant solution that everyone else has overlooked, but at this late date, I think it's highly unlikely. In particular, I think the chances of a 'silver bullet' solution that will solve ALL y2k problems is virtually zero. If you think you have such a solution, I have two words for you: embedded systems. If that's not enough, I have three words for you: 500 programming languages. The immense variety of programming languages (yes, there really are 500!), hardware platforms, operating systems, and environmental conditions virtually eliminates any chance of a single tool, method, or technique being universally applicable."

The number 500 should be taken poetically, like the 1000 in the preserving process for so-called thousand-year-old eggs, which lasts only 100 days. For a start, the 200 proprietary languages should be added; moreover, other estimates indicate that 700 is rather conservative: Weinberg estimated already in 1971 that in 1972 programming languages would be invented at the rate of one per week, or more if we count the ones that never make it into the literature, and enormously more if we count dialects too [3, p. 242].

Peter de Jager also created awareness for the 500 language problem. He writes about the availability of Y2K tools [4]:

"There are close to 500 programming languages used to develop applications. Most of these conversion or inventory tools are directed toward a very small subset of those 500 languages. A majority of the tools are focused on Cobol, the most popular business programming language in the world. Very few tools, if any, have been designed to help in the area of APL or JOVIAL, for example."

If everyone were using Cobol, and only a few systems were written in uncommon languages, the 500 language problem would be of limited importance. Therefore, it is useful to know what the actual language distribution of installed software is. First, there are about 300 Cobol dialects: each compiler product has a few versions, with many patch levels, and Cobol often contains embedded languages like DMS, DML, CICS and SQL.
So there is no such thing as the Cobol language. Cobol is a polyglot: a confusing mixture of dialects and embedded languages, a 500 language problem of its own. Second, Yourdon and de Jager were right about the importance of the 500 language problem: 40% of all software is written in less common languages. To be precise, the distribution of the world's installed software by language, according to Capers Jones, is as follows:

Cobol: 30% (225 billion LOC)
C/C++: 20% (180 billion LOC)
Assembler: 10% (140-220 billion LOC)
less common languages: 40% (280 billion LOC)

In contrast, Y2K search engines existed for about 50 languages, and automated repair engines for about 10 languages [2, p. 325]. So only for a very small fraction of the languages is there automated modification support. This lack caused people to worry about the 500 language problem.

What is the 500 Language Problem?

What is it, and is it still relevant? We entered the new millennium without too much trouble, so one could conclude that maybe there was this 500 language problem, but whatever it was, it is not relevant anymore. Of course, the 500 language problem already existed before it was popularized by the Y2K gurus, and it did not go away when we entered the new millennium. Capers Jones identified and named the problem. But what is a good description of it? Here is a succinct formulation:

The 500 language problem is the most prominent impediment to constructing tools that analyze and modify existing software written in those languages.

Removing this impediment solves the 500 language problem. We illustrate what this impediment comprises. If you want tools to accurately probe and manipulate source code, a prerequisite for such tools is that the code be converted from text format into a tree format. To make this conversion you need a so-called syntactic analyzer, or parser. Constructing a parser for analysis or modification is a major effort, and in many cases the up-front investment deters commercial tool builders, which explains the lack of tools. Indeed, Tom McCabe told us that McCabe & Associates developed parsers for 23 languages, which was already a huge investment. But 500 would be insurmountable; therefore, he dubbed the 500 language problem "the number one problem in software renovation".
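To make the text-to-tree prerequisite concrete, here is a minimal sketch in Python of what a parser produces, assuming a single toy statement form. The Node class and the assignment syntax are invented for the illustration; a real renovation parser must cover a complete language, which is exactly the expensive part.

import re
from dataclasses import dataclass

@dataclass
class Node:
    kind: str
    children: list

def tokenize(text):
    # Split "TOTAL = PRICE + TAX" into identifiers and operators.
    return re.findall(r"[A-Za-z][A-Za-z0-9-]*|[=+]", text)

def parse_assignment(text):
    # One hard-coded grammar rule: assignment = IDENT "=" IDENT ("+" IDENT)*
    target, eq, *expr = tokenize(text)
    assert eq == "="
    tree = Node("var", [expr[0]])
    for op, operand in zip(expr[1::2], expr[2::2]):
        assert op == "+"
        tree = Node("add", [tree, Node("var", [operand])])
    return Node("assign", [Node("var", [target]), tree])

print(parse_assignment("TOTAL = PRICE + TAX"))
# -> assign(var TOTAL, add(var PRICE, var TAX)) as a nested Node structure

Scaling this up from one statement form to the hundreds of rules of a real Cobol dialect is what makes parser construction the bottleneck.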
A sometimes-heard solution for the 500 language problem is to simply convert from uncommon languages to mainstream ones for which tool support is available. Eliminating all these languages would make the 500 language problem go away. This is not a solution: you need a full-blown tool suite to make the conversion, including a serious parser, and obtaining a parser is part of the 500 language problem. So language conversion will not eliminate the 500 language problem; on the contrary, you need a solution to the 500 language problem to aid in solving conversion problems.

A second suggestion, reported in Usenet discussions, is to generate grammars from the source code only, in the same way linguists try to infer a grammar from a piece of natural language. In search of solutions, we studied this idea and consulted the relevant literature. We did not find any successful effort where the linguistic approach helped to create a grammar for a parser in a cost-effective way. Our conclusion is that the linguistic approach does not lead to useful grammar inference from which you can build parsers [5].

Another suggestion for solving the 500 language problem is to reuse the parsers inside compilers. This works somewhat better: you tap the parser output from a compiler and feed it to a renovation tool. In fact, this is what Prem Devanbu is doing with his GENOA/GENII system [6]. He developed a programmable tool that can turn the idiosyncratic output format of a parser into another format more suitable for code analysis. There is, however, one major drawback to this approach: as Devanbu points out in his paper [6], the GENOA system does not allow for modifying code. This is no surprise, since a compiler removes comments, expands macros, includes files, and minimizes syntax, and thus irreversibly deforms the original source code. The intermediate format is good enough for analysis in some cases, but the code can never be turned back into acceptable text format. Another very real limitation is that for many compilers you do not get access to the sources (for economic reasons). In renovation projects, such as Y2K, Euro, code restructuring, language conversion, and so on, it is a requirement that you can automatically modify code: the code volume prohibits effective and efficient renovation by hand.
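Python's own front end gives a miniature feel for this tapping approach; the analogy is ours, not part of GENOA. The standard ast module exposes the compiler's tree, which is convenient for analysis, but the comment is already gone, so the original text cannot be reproduced.

import ast

source = """\
# compute the total including tax
total = price + tax
"""

tree = ast.parse(source)

# Analysis works fine: every identifier is in the tree ...
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        print(node.id)            # -> total, price, tax

# ... but regenerating text from the tree loses the comment for good.
print(ast.unparse(tree))          # -> "total = price + tax"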
Summarizing: the availability of grammars is very sparse, and the known routes to obtaining them are far from optimal. The 500 language problem is a real problem; it was a real problem before the Y2K gurus baptized it, and it did not go away after the millennium passed. Its solution is a first step toward enabling tool support for analyzing and modifying our 800-900 billion LOC of existing software assets written in numerous languages.

How We Are Cracking the 500 Language Problem

Ed Yourdon claimed that the large number of programming languages would virtually eliminate any chance of a single tool, method, or technique being universally applicable. It turns out that the 500 language problem does have a single solution, and it is not too hard either. This is what we mean by the word solution:

The 500 language problem is cracked when there is a cheap, rapid and reliable method to produce grammars for the myriad languages, so that analysis and modification of existing code is enabled.

We explain what this all means. Cheap is in the 25,000-50,000 US dollar range; rapid is in the two-week range (one person); and reliable means that the parser based on the produced grammar passes the test of parsing millions of lines of code provided by the customer in need of tool support. Why is this a solution? After all, a grammar is hardly a Euro conversion tool or a Y2K analyzer. Next we explain why the most dominating factor in constructing renovation tools is constructing the underlying parser.

From Grammar to Renovation Tool

Renovation tools routinely comprise the following main components: preprocessors, parsers, analyzers, transformers, visualizers, pretty printers, and postprocessors. In many cases, language-parametrized (or generic) tools are available to construct these components. Think of parser generators, pretty printer generators, graph visualization packages, rewrite engines, generic data flow analyzers, and the like. Workbenches providing this functionality include Elegant, Refine and ASF+SDF, but there are many more. This is the generic core of all renovation tools. In Figure 1 we depict how you go from a grammar to actual renovation tools.

Figure 1: Effort shift for renovation tool development.

We expressed effort by the length of the arrows (longer arrows imply more effort). As you can see, if you have a generic core and a grammar, it does not take too much effort to construct parsers, tree walkers, pretty printers, and so on. Although these components depend on a particular language, their implementation uses generic language technology: a parser is generated using a parser generator, a pretty printer is generated using a formatter generator [7], and tree walkers for analysis or modification are generated similarly [8]. What all these generators share is that they heavily rely on the grammar. Once you have the grammar and the relevant generators, you can rapidly set up this core for developing software renovation tools. You could call this the grammar-centric approach. Leading Y2K companies indeed constructed generic Y2K analyzers, so that dealing with a new language ideally reduced to constructing a parser. The bottleneck is obtaining complete and correct grammar specifications. The dashed part of Figure 1 expresses the current situation: it takes a lot of effort to create those grammars. In Table 1 we quantify the effort for a typical Cobol renovation project using the solution we propose. We discuss the project shortly, but first notice that the grammar part took two weeks. Implementing a quality Cobol parser can take 2 to 3 years, as Vadim Maslov of Siber Systems posted on Usenet (he constructed Cobol parsers for about 16 dialects). Adapting an existing Cobol parser to cope with new dialects also easily takes 3 to 5 months, as we learned from several estimates done by others. Moreover, patching existing grammars using mainstream parser technology leads to unmaintainable grammars [9,10], significantly increasing the time it takes to adapt parsers. Using our approach this effort is reduced significantly (in this example to 2 weeks), so that you can much more quickly start developing actual renovation tools.

To illustrate how to go from a grammar to an actual renovation task, we briefly describe a Cobol renovation project [11] in which others applied our grammar-centric approach. The project concerned one of the largest financial enterprises in the world. They needed an automatic converter from Cobol 85 back to Cobol 74 (the 8574 project). The Cobol 85 code was machine-generated by a 4GL tool (KEY), so the conversion problem was fortunately limited, due to the limited vocabulary of the code generator. It took some time to find solutions for intricate problems such as how to simulate Cobol 85 features like explicit scope terminators (END-IF, END-ADD), or how to express the INITIALIZE statement in the less rich Cobol 74 dialect. The solutions were discussed with the customers and tested for equivalence. Once these problems were solved, it was not much work to implement the components, thanks to the generic core assets generated from the recovered Cobol 85 grammar. The problem could be cut into 6 separate tools, taking 5 days to implement. The hand-written code was limited (less than 500 LOC), but compiled into about 100,000 lines of C code and 5,000 lines of makefile code (linking in all the generated generic renovation functionality). After compilation to 6 executables (2.6 Mb each), it took 25 lines of code to coordinate them into a distributed component-based software renovation factory that converts Cobol 85 code back to Cobol 74 at a rate of 500,000 LOC/hour, using 11 Sun workstations.

Table 1: Effort for the 8574 project.

  8574 project    effort
  --------------  -------
  grammar         2 weeks
  generation      1 day
  6 tools         5 days
  assemblage      1 hour
  --------------  -------
  total           3 weeks
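To give a flavor of what one such generated tool does, here is a hypothetical miniature in Python, using the Lark parser generator as a stand-in for the generic core (the project used its own generated assets, not this library, and the toy grammar below is invented): it parses a statement with an explicit END-IF scope terminator and re-emits it as a period-terminated Cobol 74 style sentence.

from lark import Lark, Transformer

grammar = r"""
    stmt: "IF" cond "THEN"? body ("ELSE" body)? "END-IF"
    cond: WORD "=" WORD
    body: verb+
    verb: "DISPLAY" WORD
    WORD: /[A-Z][A-Z0-9-]*/
    %ignore /\s+/
"""

parser = Lark(grammar, start="stmt")

class ToCobol74(Transformer):
    # Rebuild text bottom-up from the parse tree.
    def cond(self, items): return f"{items[0]} = {items[1]}"
    def verb(self, items): return f"DISPLAY {items[0]}"
    def body(self, items): return " ".join(items)
    def stmt(self, items):
        cond, then, *rest = items
        out = f"IF {cond} {then}"
        if rest:
            out += f" ELSE {rest[0]}"
        return out + "."          # Cobol 74: the sentence ends with a period

code85 = "IF STATUS = OK THEN DISPLAY GREETING ELSE DISPLAY WARNING END-IF"
print(ToCobol74().transform(parser.parse(code85)))
# -> IF STATUS = OK DISPLAY GREETING ELSE DISPLAY WARNING.

The real converter had to handle rewrites like this for the full recovered grammar, including nested scopes, which is why the generated generic core mattered.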
Measuring this and other projects, it became clear to us that the total effort of writing a grammar by hand is orders of magnitude larger than constructing the renovation tools themselves. So the most dominating factor in producing renovation tools is constructing the parser. Building parsers with our approach reduces the effort to the same order of magnitude as constructing the renovation tools. Building parsers as such is not hard: use a parser generator. But the input of a parser generator is a grammar description. So the most important artifacts we need to enable tool support for software renovation are complete and correct grammars. When we find an effective way to produce grammars quickly for many languages, we remove the largest impediment to constructing tools for those languages, and we thus solve the 500 language problem.

But how to produce grammars quickly? Recapturing the syntax of an existing language is usually done by hand: take a huge amount of sources, manuals, books, and a parser generator, and start working. We, and many others, have worked like this for years. But then we realized that this hand-work is not necessary. Since we are dealing with existing languages, the grammars have already been constructed. This is what we discovered about grammars: do not create them, steal them, and then massage them to your needs.

Grammar Stealing Covers Almost All Languages

We commence with an important argument showing that our approach covers virtually all languages: we found only two actually problematic cases. We discuss the coverage diagram depicted in Figure 2.

Figure 2: Coverage diagram for grammar stealing.

Recall that we need to produce grammars for existing software, e.g., legacy systems. So the deployed software is compilable (or can be interpreted). After passing the "start here" box, we enter the "compiler sources" diamond. There are two possibilities: either the source code of the compiler is available to you or it is not. First we discuss the yes path. Then the only thing you have to do is find the part that turns the text into an intermediate form; that part contains the grammar. You can find it by grepping the compiler sources for keywords of the language.

There are three possibilities: the grammar part is hard-coded, a parser generator is used, or both (in a complex multi-language compiler, for instance). We only need to cover the first two cases (they are present in the diagram). In the hard-coded case, you have to reverse engineer the actual grammar from the hand-written code. Fortunately, the comments of such code often provide BNF rules giving you an indication of what the grammar comprises. Moreover, compiler construction is a well-understood subject: there is even a known reference architecture, so compilers are often implemented with well-known algorithms. The quality of a hard-coded parser is therefore usually good, e.g., a recursive descent algorithm is used. In such cases you can easily recover the grammar from the code, the comments, or both. In one case we know the grammar is not easily extractable: the language perl [12]. In all the other cases we encountered, the quality of the code was always sufficient to recover the grammar.

If the parser is not hard-coded, it is generated (the BNF branch in Figure 2). But then there must be some BNF description of it in the compiler sources, and with a simple tool that parses the BNF itself you can extract that BNF.
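Such an extraction tool can indeed be simple. The sketch below assumes the generated case uses a yacc/bison-style grammar file (the file content is invented for the example); it slices out the rules section between the %% markers and prints each production as plain BNF.

import re

# Invented miniature of a yacc/bison rules file, standing in for the
# grammar part found by grepping a compiler's sources.
yacc_source = """\
%token IDENTIFIER NUMBER
%%
stmt : IDENTIFIER EQUALS expr SEMI
     ;
expr : expr PLUS term
     | term
     ;
term : IDENTIFIER
     | NUMBER
     ;
%%
"""

# The productions sit between the two %% markers.
rules = yacc_source.split("%%")[1]

# Each production has the shape "name : alternatives ;".
for name, body in re.findall(r"(\w+)\s*:\s*(.*?);", rules, re.DOTALL):
    for alt in body.split("|"):
        print(f"{name} ::= {' '.join(alt.split())}")
# -> stmt ::= IDENTIFIER EQUALS expr SEMI
#    expr ::= expr PLUS term
#    expr ::= term  ... and so on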
So in all cases where we have the compiler sources, we can recover the grammar, except for perl. This finishes the case where you have access to the source code of a compiler. Later on we discuss a published recovery case from compiler sources, to give you an idea of grammar stealing when the compiler sources are available to you.

Now we look at the case where there is no access to the compiler sources (we enter the "language reference manual" diamond in Figure 2). In that case there are two possibilities: there is a language reference manual or there is not. Let us first discuss the case where a language reference manual is available. This can be a compiler vendor manual or an official language standard. There are three possibilities: the language is explained by example, by general rules, or both. We only need to treat the first two cases. Let us first assume that there are general rules. Then there is the quality issue: reference manuals and language standards are known to be full of errors. To our surprise, we discovered that the myriads of errors are of a repairable category. We were surprised because in 1998 we experienced a total failure to recover a grammar from the manual of a proprietary language for which, obviously, the compiler sources were also available (so that case is covered by the upper half of Figure 2). As you can see in the coverage diagram, we have not found low quality language reference manuals containing general rules in cases where we did not have access to compiler sources. This is explained as follows: compiler vendors do not give away the source code of a compiler for economic reasons, but to be successful as a company they must supply accurate documentation explaining the entire language. We discovered that the quality level of those manuals is good enough to recover the grammar. Later on we discuss a published recovery case from a language reference manual, to give you an idea of grammar stealing when no compiler sources are available to you. For an uncommon language it is much more rare to have

