No offense, but I’m not addicted to representative measures. In some areas, I’m more than happy to have them. When it comes to the balance of my checking and savings accounts, for example, I want representative measures, to the nearest cent. But I don’t need representative measures 100 percent of the time. On the contrary, in some areas I actively want non-representative measures that give me efficient guidance.
Third-generation programming languages (3GLs) like COBOL or PL/1 are seen as outdated languages for “has-been” developers, and they no longer interest new ones (there were even predictions that COBOL would die in the medium term). These newer developers prefer more modern technologies, like J2EE or .NET, and, worryingly, educational organizations provide few learning opportunities for 3GLs.
When you think about PHP, it is often associated with small applications built by passionate developers for their personal use, generally websites with low database usage and/or few visitors. Well, how wrong we are! PHP is used for a wide range of applications that generate a lot of traffic, for example in public administrations or big companies. These organizations require that their applications maintain high scalability, high availability, and, of course, no drop in performance. It’s no wonder that performance and speed are very popular quality goals when it comes to PHP.
When every product has the same features, the only way to stand out in the jungle that is today’s software ecosystem is to offer the one that performs best. Of course, in this article, by product we mean an application and its code. For .NET applications, this is truer than ever. Here are ten tips that can greatly improve the performance of your .NET application.
False positives are unavoidable and appear, to a greater or lesser degree, in every software application measurement system. There are several causes for this situation. First, the more information we search for, the higher the risk of false positives. Second, the more complex that information is to find, the higher the risk of errors. And third, the less sophisticated the technique used to scan the code, the higher the risk of bad results. On this last point, the techniques commonly used range from a simple grep search to syntax-based parsing, semantic resolution, and dataflow analysis. However, the situation can be seen from two opposite points of view: a negative … Read More
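To make the grep-versus-parsing contrast concrete, here is a minimal sketch in Python (the rule, the sample source, and the helper names are illustrative assumptions, not taken from any real tool). The rule being checked is “do not call the built-in `eval()`”: a text search flags every line where the string “eval” appears, while a syntax-based scan of the parsed AST flags only genuine calls.

```python
import ast
import re

source = '''
def evaluate_score(x):       # "eval" appears inside an identifier
    return x * 2

msg = "never use eval here"  # "eval" appears in a string literal

result = eval("1 + 1")       # the only real violation
'''

# 1) grep-style search: flags every line containing the text "eval",
#    including the identifier and the string literal above
grep_hits = [n for n, line in enumerate(source.splitlines(), 1)
             if re.search(r"eval", line)]

# 2) syntax-based scan: walks the parsed tree and flags only actual
#    calls to a function named "eval"
ast_hits = [node.lineno for node in ast.walk(ast.parse(source))
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"]

print(len(grep_hits), len(ast_hits))  # grep flags 3 lines, the AST scan 1
```

The two extra grep hits are exactly the false positives described above: the information we searched for (“eval”) was found, but not in the form we cared about.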
Static code analysis is used more and more frequently to improve application software quality. Management and development teams put specific processes in place to scan the source code (automatically or not) and control the architecture of the applications they are in charge of. Multiple analyzers are deployed to parse the files that are involved in application implementation and configuration, and they generate results like lists of violations, ranking indexes, quality grades, and health factors. Based on the information that is presented in dedicated tools like dashboards or code viewers, managers and team leaders can then decide which problems must be solved and the way the work has to be done. … Read More
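As a minimal sketch of the kind of result such analyzers generate, here is a hypothetical checker in Python. The two rules (an 80-character line limit and leftover TODO markers) and all names are illustrative assumptions; the point is only the shape of the output: a list of violations with line numbers that a dashboard or code viewer could then present.

```python
MAX_LINE_LENGTH = 80  # assumed limit for the illustrative rule

def scan(source: str) -> list[dict]:
    """Scan source text and return one violation record per rule breach."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE_LENGTH:
            violations.append({"line": lineno, "rule": "line-too-long"})
        if "TODO" in line:
            violations.append({"line": lineno, "rule": "leftover-todo"})
    return violations

# A small sample: line 2 carries a TODO, line 3 is far too long.
sample = "x = 1\n# TODO: remove debug flag\ny = " + "1 + " * 30 + "1\n"
for v in scan(sample):
    print(v["line"], v["rule"])
```

Real analyzers of course apply hundreds of rules and richer parsing, but the output they feed into dashboards is structurally this: rule identifiers attached to source locations.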
Modern integrated development environments (IDEs) are equipped with more and more tools to help developers code faster and better. Among these are plug-ins that let developers scan source code for error-prone constructs, dangerous or deprecated statements, or practices that should be avoided. IDEs come in a variety of flavors, both free and commercial, but in all cases developers can install such plug-ins to improve the quality of the code they produce. Some organizations encourage their developers to explore and deploy these tools, but as any good app developer knows, there is a difference between installing an app and using it consistently. Installing a tool is one thing, … Read More