dcblogs writes "U.S. government contracts often require bidders to have achieved some level of Capability Maturity Model Integration (CMMI). CMMI arose some 25 years ago with the backing of the Department of Defense and the Software Engineering Institute at Carnegie Mellon University. It operated as a federally funded research and development center until a year ago, when responsibility for the CMMI product was shifted to a private, for-profit LLC, the CMMI Institute. The Institute is now owned by Carnegie Mellon. Given that the CMMI Institute is now a self-supporting firm, any requirement that companies be certified by it — and spend the money needed to do so — raises a natural question. 'Why is the government mandating that you support a for-profit company?' said Henry Friedman, the CEO of IR Technologies, a company that develops defense logistics software and uses CMMI. The value of a certification is subject to debate: to what extent does a CMMI certification determine a successful project outcome? CGI Federal, the lead contractor at Healthcare.gov, is a veritable black belt in software development. In 2012, it achieved the highest possible CMMI level for development certification, only the 10th company in the U.S. to do so."
CowboyRobot writes "Andrew Koenig at Dr. Dobb's argues that by looking at a program's structure — as opposed to only looking at output — we can sometimes predict circumstances in which it is particularly likely to fail. 'For example, any time a program decides to use one of two (or more) algorithms depending on an aspect of its input such as size, we should verify that it works properly as close as possible to the decision boundary on both sides. I've seen quite a few programs that impose arbitrary length limits on, say, the size of an input line or the length of a name. I've also seen far too many such programs that fail when they are presented with input that fits the limit exactly, or is one greater (or less) than the limit. If you know by inspecting the code what those limits are, it is much easier to test for cases near the limits.'"
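Koenig's advice is easy to act on once the limit is visible in the source. A minimal Ruby sketch (the function and its 32-character limit are hypothetical) probes both sides of the boundary:

```ruby
# Hypothetical function with an arbitrary length limit of the kind
# Koenig describes: it rejects names longer than MAX_NAME.
MAX_NAME = 32

def store_name(name)
  raise ArgumentError, "name too long" if name.length > MAX_NAME
  name.dup
end

# Knowing the limit from inspecting the code, probe both sides of it:
[MAX_NAME - 1, MAX_NAME, MAX_NAME + 1].each do |n|
  input = "x" * n
  begin
    result = store_name(input)
    puts "length #{n}: accepted (#{result.length} chars)"
  rescue ArgumentError
    puts "length #{n}: rejected"
  end
end
```

The three probes cover exactly the cases Koenig lists: one under the limit, exactly at it, and one over.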
An anonymous reader writes "Last year, Jordan Mechner, the creator of the Prince of Persia video game franchise, released the long-thought-lost original Apple II source code for Prince of Persia. Today marks the release of version 2.0 of apoplexy, a free and open-source level editor for Prince of Persia for DOS. Roughly 5.5 years after its initial release, support has been added for editing Prince of Persia 2 levels in both GNU/Linux and Windows. The game has its 25th anniversary next year, but the original trilogy only has a (very) small fan community. Will old games such as this also interest future generations, or will they gradually lose their appeal because of technological advances?"
theodp writes "In 1919, Nora Bayes sang, "How ya gonna keep 'em down on the farm after they've seen Paree?" In 2013, discussing User Culture Versus Programmer Culture, CS Prof Philip Guo poses a similar question: 'How ya gonna get 'em down on UNIX after they've seen Spotify?' Convincing students from user culture to toss aside decades of advances in graphical user interfaces for a UNIX command line is a tough sell, Guo notes, and one that's made even more difficult when the instructors feel the advantages are self-evident. 'Just waving their arms and shouting "because, because UNIX!!!" isn't going to cut it,' he advises. Guo's tips for success? 'You need to gently introduce students to why these tools will eventually make them more productive in the long run,' Guo suggests, 'even though there is a steep learning curve at the outset. Start slow, be supportive along the way, and don't disparage the GUI-based tools that they are accustomed to using, no matter how limited you think those tools are. Bridge the two cultures.'" Required reading.
An anonymous reader writes "With the release of SteamOS, developing video game engines for Linux is a subject with increasing interest. This article is an initiation guide on the tools used to develop games, and it discusses the pros and cons of Linux as a platform for developing game engines. It goes over OpenGL and drivers, CPU and GPU profiling, compilers, build systems, IDEs, debuggers, platform abstraction layers and other tools."
Today marks the release of Ruby version 2.1.0. A brief list of changes since 2.0.0 has been posted, and file downloads are available. Here are some of the changes:
- The default values of keyword arguments can now be omitted. Such 'required keyword arguments' must be supplied explicitly at call time.
- Added suffixes for integer and float literals: 'r', 'i', and 'ri'.
- def-expr now returns the symbol of its name instead of nil.
- rb_profile_frames() added. Provides low-cost access to the current ruby stack for callstack profiling.
- Introduced the generational GC, a.k.a. RGenGC (PDF).
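A few of the language-level changes above can be seen in a short snippet (method and variable names are illustrative):

```ruby
# Required keyword argument: width has no default, so callers must pass it.
def area(width:, height: 1)
  width * height
end

area(width: 3)             # => 3
area(width: 3, height: 4)  # => 12
# area() would raise ArgumentError: missing keyword: width

# New literal suffixes: 'r' for Rational, 'i' for Complex.
half = 1/2r   # => (1/2), a Rational
im   = 2i     # => (0+2i), a Complex

# def is now an expression returning the method name as a symbol,
# which enables idioms like `private def helper; ...; end`.
sym = def helper; end   # sym == :helper
```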
An anonymous reader writes "A recent paper from Georgia Tech (abstract, paper itself) describes a system that can run the complete TPC-H benchmark suite on an NVIDIA Titan card, at a 7x speedup over a commercial database running on a 32-core Amazon EC2 node, and a 68x speedup over a single-core Xeon. A previous story described an MIT project that achieved similar speedups. There has been a steady trickle of work on GPU-accelerated database systems for several years, but it doesn't seem like any code has made it into Open Source databases like MonetDB, MySQL, CouchDB, etc. Why not? Many queries that I write are simpler than TPC-H, so what's holding them back?"
New submitter John Moses writes "I have been working with node.js a lot lately, and have been discussing with co-workers whether node.js is taking steam away from Ruby at all. I think the popularity of a language is an important talking point when selecting a language and framework for a new project. A graph of gem release dates over time could help answer the question. The front page of RubyGems only shows data on the most popular gems, but I am really interested in seeing recent activity. My theory is that if developers' contributions to different gems are slowing down, then so is the popularity of the language."
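One way to get at the data the submitter wants is the public RubyGems API, which serves a per-gem version list as JSON with a created_at timestamp for each release (GET /api/v1/versions/&lt;gem&gt;.json). A sketch that buckets those timestamps by year, with the network fetch left to the caller and canned data standing in:

```ruby
require "json"
require "time"

# Count releases per year from a RubyGems API versions payload
# (a JSON array of versions, each with a "created_at" field).
def releases_per_year(versions_json)
  JSON.parse(versions_json)
      .map { |v| Time.parse(v["created_at"]).year }
      .tally
      .sort
      .to_h
end

# Canned stand-in for the API response for one gem.
sample = <<~JSON
  [{"number":"1.0.0","created_at":"2012-03-01T00:00:00Z"},
   {"number":"1.1.0","created_at":"2013-07-15T00:00:00Z"},
   {"number":"1.1.1","created_at":"2013-11-02T00:00:00Z"}]
JSON
p releases_per_year(sample)  # => {2012=>1, 2013=>2}
```

Aggregated over many gems, a per-year tally like this is essentially the graph the submitter asks for: flat or falling counts would suggest slowing activity.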
theodp writes "What's wrong with this picture?" asked Code.org at its launch earlier this year, lamenting the lack of Computer Science students in an infographic that made no reference to race or gender. But as the organization has grown via public/private partnerships and inked agreements to drive the CS curriculum for the Chicago and NYC school systems, the same stats webpage has adopted a new gender and racial equity focus, positioning Computer Science education as "a chance to level the playing field" for women, Hispanic and African American students. The new message is consistent with the recently-forged Code.org partnership with the NSF-funded Exploring Computer Science (ECS, "a K-12/university partnership committed to democratizing computer science") and Computer Science Principles (CSP, "a new course under development that seeks to broaden participation in computing and computer science"). According to The Research Behind ECS, an "insidious 'virtual segregation' that maintains inequality" is to blame for keeping the number of African American and Latino/a CS students disproportionately low. So, what might the future of Code.org's proposed equity-based U.S. K-12 CS education look like? "Including culturally relevant instructional materials represented a driving focus of our course development," explained ECS Team members who now advise Code.org. "Cultural design tools encourage students to artistically express computing design concepts from Latino/a, African American, or Native American history as well as cultural activities in dance, skateboarding, graffiti art, and more. These types of lessons are important for students to build personal relationships with computer science concepts and applications – an important process for discovering the relevance of computer science for their own life." And — ironically for Code.org — it could mean less coding."
An anonymous reader writes: "I hope there are a few open source developers on Slashdot who understand this. As a developer who works alone and remotely (while living with my own family) — and is schizophrenic — there would be times I would feel very high (a surge of uncontrollable thoughts), or low because of the kind of failures that some patients with mental illness would have, and because of the emotional difficulty of being physically alone for 8 hours a day. This led me to decide to work physically together with my co-workers. Have you been in this situation before? If you have, how well did you manage it? (Medications are a part of the therapy as well.)"
davecb writes "The Obamacare sign-up site was a classic example of managers saying 'not invented here' and doing everything wrong, as described in Poul-Henning Kamp's Center Wheel for Success, at ACM Queue." It's not just a knock on the health-care finance site, though: "We are quick to dismiss these types of failures as politicians asking for the wrong systems and incompetent and/or greedy companies being happy to oblige. While that may be part of the explanation, it is hardly sufficient. ... [New technologies] allow us to make much bigger projects, but the actual success/failure rate seems to be pretty much the same."
jones_supa writes "A month ago there was worry about Kdenlive's main developer being missing. Good news: Jean-Baptiste Mardelle has finally been reached and is doing fine. In a new mailing list post, Vincent Pinon says he managed to find Mardelle's phone number and contacted the longtime KDE developer. It turns out that Mardelle took a break over the summer but then lost motivation in Kdenlive under the burden of the ongoing refactoring of the code. Pinon agreed that there are 'so many things to redo almost from scratch just to get the 'old' functionalities'. The full story can be read on the kdenlive-devel mailing list. After talking with Jean-Baptiste, Vincent has called upon individual developers interested in Kdenlive to come forward. Among the actions called for are putting the Git master code-base back in order, ensuring the code is of good quality, providing new communication about the project, integrating new features like GPU-powered effects and a Qt5 port, and progressively integrating the new Kdenlive design."
itwbennett writes "A new IDC study has found that 'of the 18.5 million software developers in the world, about 7.5 million — roughly 40 percent — are so-called hobbyist developers,' which by IDC's definition is 'someone who spends 10 hours a month or more writing computer or mobile device programs, even though they are not paid primarily to be a programmer.' Lumped into this group are students, people hoping to strike it rich with mobile apps, and people who code on the job but aren't counted among the developer ranks."
KDE Community writes "The KDE Community is proud to announce the latest major updates to KDE software, delivering new features and fixes. With Plasma Workspaces and the KDE Platform frozen and receiving only long-term support, those teams are focusing on the technical transition to Frameworks 5. This release marks substantial improvements in the KDE PIM stack, giving much better performance and many new features. Kate added new features including initial Vim-macro support, and the games and educational applications bring a variety of new features. The announcement for KDE Applications 4.12 has more information. This release of KDE Platform 4.12 includes only bugfixes and minor optimizations. About 20 bugfixes as well as several optimizations have been made to various subsystems. A technology preview of the Next Generation KDE Platform, named KDE Frameworks 5, is coming this month."
Nerval's Lobster writes "A compiler can take your C++ loops and create vectorized assembly code for you. It's obviously important that you RTFM and fully understand compiler options (especially since the defaults may not be what you want or think you're getting), but even then, do you trust that the compiler is generating the best code for you? Developer and editor Jeff Cogswell compares the g++ and Intel compilers when it comes to generating vectorized code, building off a previous test that examined the g++ compiler's vectorization abilities, and comes to some definite conclusions. 'The g++ compiler did well up against the Intel compiler,' he wrote. 'I was troubled by how different the generated assembly code was between the 4.7 and 4.8.1 compilers—not just with the vectorization but throughout the code.' Do you agree?"
hessian writes "According to a news release from Purdue University, 'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption. "The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency," said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. "Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter – understanding us, being more context-aware and having more natural user interfaces. ... The nature of these computations is different from the traditional computations where you need a precise answer."' What's interesting here is that this is how our brains work."
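Purdue's work is about hardware, but the accuracy-for-efficiency trade is easy to illustrate in software with a toy "loop perforation" sketch: sample every other element and scale the result, doing roughly half the work for an approximate answer. (This is purely illustrative and not the researchers' method.)

```ruby
# Toy "loop perforation": sum only every stride-th element, then scale.
# Roughly 1/stride of the work, for an approximate (not exact) result.
def approx_sum(values, stride: 2)
  (0...values.length).step(stride).sum { |i| values[i] } * stride
end

data   = (1..1000).to_a
exact  = data.sum           # 500500
approx = approx_sum(data)   # 500000: off by ~0.1% at half the work
puts "exact=#{exact} approx=#{approx} " \
     "error=#{((exact - approx).abs * 100.0 / exact).round(2)}%"
```

For workloads like media processing, where a pixel slightly off goes unnoticed, this kind of bounded error can be an acceptable price for the efficiency gain.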
Hugh Pickens DOT Com writes "Chuong Nguyen reports that Apple is forcing developers to adopt iOS 7's visual UI for their apps, and has advised iOS developers that all apps submitted after February 1, 2014 must be optimized for iOS 7 and built using Xcode 5 ... 'It's likely that Apple is more anxious than ever for developers to update their apps to fit in visually and mechanically with iOS 7, as it's the largest change in the history of Apple's mobile software,' says Matthew Panzarino. 'iOS 7 introduced a much more complex physical language while stripping out many of the visual cues that developers had relied on to instruct users. For better or worse, this has created a new aesthetic that many un-updated apps did not reflect.' Most app developers have been building apps optimized for iOS 7 since Apple's World Wide Developer Conference in June 2013. Apple has been on a push over the past couple of years to encourage developers to support the latest editions of its OS faster than ever. To do this, it's made a habit of pointing out the adoption rates of new versions of iOS, which are extremely high. Nearly every event mentions iOS 7 adoption, which now tops 76% of all iOS users, and Apple publishes current statistics. To be optimized for the new operating system, apps must be built with the latest version of Xcode 5, which includes 64-bit support and access to new features like backgrounding APIs."
tsu doh nimh writes "Security experts have long opined that one way to make software more secure is to hold software makers liable for vulnerabilities in their products. This idea is often dismissed as unrealistic and one that would stifle innovation in an industry that has been a major driver of commercial growth and productivity over the years. But a new study released this week presents perhaps the clearest economic case yet for compelling companies to pay for information about security vulnerabilities in their products. Stefan Frei, director of research at NSS Labs, suggests compelling companies to purchase all available vulnerabilities at above black-market prices, arguing that even if vendors were required to pay $150,000 per bug, it would still come to less than two-tenths of one percent of these companies' annual revenue (PDF). To ensure that submitted bugs get addressed and not hijacked by regional interests, Frei also proposes building multi-tiered, multi-region vulnerability submission centers that would validate bugs and work with the vendor and researchers. The question is: would this result in a reduction in cybercrime overall, or would it simply hamper innovation? As one person quoted in the article points out, a majority of data breaches that cost companies tens of millions of dollars have far more to do with factors unrelated to software flaws, such as social engineering, weak and stolen credentials, and sloppy server configurations."
theodp writes "On the final day of Computer Science Education Week, the Hour of Code bravado continues. Around 12:30 a.m. Sunday (ET), Code.org was boasting that in just 6 days, students of its tutorials have "written" more than 10x the number of lines of code in Microsoft Windows. "Students of the Code.org tutorials have written 507,152,775 lines of code. Is this a lot? By comparison, the Microsoft Windows operating system has roughly 50 million lines of code." Code.org adds, "In total, 15,481,846 students have participated in the Hour of Code. Of this group, 6,872,757 of them used the tutorials by Code.org, and within the Code.org tutorial, they've written 507,152,775 lines of code." On YouTube, however, a playlist of the Code.org tutorial videos has distinctly lower numbers, with only 2,246 views of the Code.org Wrap Up video reported as of this writing. So, any thoughts on why the big disconnect, and how closely the stats might reflect reality? Code.org does explain that an 'Hour of Code' is not necessarily an 'hour of code' ("Not everybody finishes an Hour of Code tutorial. Some students spend one hour. Some spend 10 minutes. Some spend days. Instead of counting how many students 'finish one hour' or how much time they spent, this [LOC] is our simplest measure of progress"). So, with millions being spent on efforts to get Code.org into the nation's schools — New York and Chicago have already committed their 1.5 million K-12 students — is it important to get a better understanding of what the Hour of Code usage stats actually represent — and what their limitations might be — and not just accept as gospel reports like AllThingsD's 15 Million Students Learned to Program This Week, Thanks to Hour of Code ("every other school family in the U.S. has a child that has done the Hour of Code")?"
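Whatever the stats represent, the quoted numbers support a quick sanity check: spread across the Code.org tutorial users, the half-billion lines come to well under a hundred lines per student.

```ruby
# Quick arithmetic on the figures quoted above.
lines_claimed    = 507_152_775
codeorg_students = 6_872_757
windows_loc      = 50_000_000  # "roughly 50 million lines of code"

avg_lines = lines_claimed.to_f / codeorg_students
puts "average lines per Code.org tutorial user: #{avg_lines.round(1)}"
puts "multiple of Windows LOC: #{(lines_claimed.to_f / windows_loc).round(1)}"
# About 74 lines each: a short beginner exercise's worth, which puts the
# "10x the lines of code in Windows" comparison in perspective.
```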