“Who has a copy of your software?” is an interesting question for anybody in the software business. For people doing open source projects it has an obvious answer, because sharing and distributing code is part of the deal. But what if you are a proprietary software vendor, or work for a company that produces internal or external business software? Then the question means something completely different.
And there are more questions: if you outsource or nearshore development, do you have controls (can you even have controls?) on who is able to copy your proprietary solutions in the form of un-compiled source code? If those questions don’t have a simple answer, what about code that you have let someone update, review, enhance or build projects on? Do you have real controls in place in case someone has stolen the software, planted a backdoor, or compromised it in subtle ways that may go undetected for years, or even forever?
Do you really want unauthorized copies floating around, and outsiders skimming your investment, hard work, blood, sweat and tears? Now, before you start going crazy: yes, I understand that software is routinely copied from one person to another, and that the enterprise platforms, APIs, libraries and algorithms we use all came from somewhere. My biggest concern, however, is intentional nefarious deeds.
You know, 2014 was not a great year for security: according to McAfee Labs, the third quarter of 2014 alone saw more than 307 new threats per minute, or more than five new threats every second. I won’t even get into the growth in malware threats; it is very hard to pin down just what malware is today, as it keeps morphing just as viruses do.
The question every software vendor (internal business and external supplier) should ask is: Do we have the processes (tools, human, machine) and procedures in place to understand what is being built or produced, and whether it’s clean?
Before we get into more statistics, consider a conversation I had with a CIO a few years back. His company had outsourced its billing application to the offshore vendor with the lowest bid, and that billing system served no less than part of the U.S. electrical grid. I asked if they had scanned the software they got back from the vendor; his answer was no. I asked if the product had access to the grid; his answer was yes. So how would they know if there were any backdoors in the code? His response: “We don’t!” This is but one small example.
It will only take one attack before everybody in the industry is saying we should have done better. Keep in mind that having the code allows people to investigate, interrogate and deep-dive into its construction, either to reveal security exploits already present in the product or to patch in a backdoor of their own. While work is being done on the software, backdoors and exploits are easy to add; and if there is no checking process in place, the code is wide open for exploitation.
As an example, take a generic application that processes insurance policies. What harm could come from someone getting that code? Suppose there were an exploit that gave an attacker access to customer profiles, which would include, at a minimum, all of the information needed for identity theft. In 2013 the United States lost $24 billion in damages to identity theft.
In addition, a crafty attacker could reroute policy values to an offshore bank and leave without a trace. Oh, that’s right … we have audit trails that will give us all of the information we need to capture unauthorized access, even when the attacker is using simple IP address spoofing … NOT. It may be rudimentary, but IP spoofing works, and how to do it is well documented.
The point is that we do not have sufficient controls on the source code we create and maintain, the code that runs our companies today. And we don’t use, or even have access to, adequate tools to scan, review and highlight possible exploits in un-compiled source code.
We do not, and have not, invested in the security infrastructure to reduce this exposure. We do not train our employees to hover over source code the way helicopter parents hover over their kids. IDC estimates that the average enterprise runs around 2,600 applications, homegrown and off-the-shelf, to operate the organization. How do you think they stack up?
I bet if I asked whether you know what PaaS, IaaS and SaaS are, you would be all over those acronyms. But do you know what CaaS is? It stands for “Cybercrime as a Service”: a community that breaks into U.S. healthcare systems (public and private) and sells stolen health credentials at about $10 each, roughly 20 times the value of a stolen U.S. credit card number. In fact, look at the news about Anthem: no health data appears to have been stolen at this point, but over 80 million Social Security numbers and other pieces of identity information were compromised.
So far I have barely scratched the surface of all the scary security exploits out there. My focus is on who has your source code and what they are doing with it, and on people looking for, or adding, exploits during the software construction phase. I’m not just talking about software flaws, which continue to be found every day, every minute, every second; that will be a continuing battle for any software out in the wild.
For the record, in 2013 there were 4,794 reported security exploits (known attack vectors) in the computer world. The OS is always assumed to be the problem, but the latest statistics from the National Vulnerability Database (NVD) show that only about 10 percent of exploits target the OS, 4 percent target hardware, and the remaining 86 percent target applications. Yet there are no stats on possible exploits planted in source code. Why? Because most organizations don’t even realize the threat exists, and many never report how they were exploited.
How to know who has a copy of your software
The heavily exploited areas revolve around applications, and applications are built from source code. That brings us back to the question this article opened with: Do you know who has a copy of your software?
If the answer is no, then what can you do to get to yes? If the answer is yes, the follow-up questions begin with what kind of review process is in place for the source code. Does your source code review process include the following?
- Static analysis that looks for exploits
- A good SCM (source code management) system in place
- A published build process and change procedure that validates the expected files and processes match
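The third item, validating that a build contains exactly the expected files, can be sketched with a checksum manifest. This is a minimal Python illustration under the assumption of a flat build directory; the file names used are hypothetical, and a real build pipeline would generate and store the manifest as part of the published build process:

```python
# Sketch: validate that a build directory contains exactly the expected
# files, unmodified, by comparing against a SHA-256 manifest.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def validate_build(build_dir: Path, manifest: dict) -> list:
    """Compare a build directory against an expected {name: digest} manifest.

    Returns a list of problems: files that are missing, files whose
    contents changed, and files that should not be there at all.
    """
    problems = []
    actual = {p.name: sha256_of(p) for p in build_dir.iterdir() if p.is_file()}
    for name, expected_digest in manifest.items():
        if name not in actual:
            problems.append(f"missing: {name}")
        elif actual[name] != expected_digest:
            problems.append(f"modified: {name}")
    for name in actual:
        if name not in manifest:
            problems.append(f"unexpected: {name}")
    return problems
```

An empty result means the build matches its manifest; anything else (a planted file, a swapped binary) is exactly the kind of anomaly this checklist item exists to catch.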
Here are five things to start your own list, along with some initial steps to enhance your current development process.
1. Create a source code access list for your software.
This list should include historical information, not just the people working on the software today. If you have been using a good SCM product, it should be able to generate the list and even summarize which modules were touched by which developer.
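To make the idea concrete, here is a small Python sketch that turns a git-style history (author line followed by the files touched in that commit) into a per-module access list. The log format, names and file paths are made up for illustration; with a real repository you would feed in actual SCM output instead:

```python
# Sketch: derive a per-module source code access list from SCM history.
# SAMPLE_LOG mimics simplified `git log --name-only` output; all names
# and paths are hypothetical.
from collections import defaultdict

SAMPLE_LOG = """\
Alice Example
billing/rates.py
billing/invoice.py

Bob Contractor
billing/invoice.py
core/auth.py
"""

def access_list(log_text: str) -> dict:
    """Map each module (file) to the set of developers who touched it."""
    touched = defaultdict(set)
    author = None
    for line in log_text.splitlines():
        line = line.strip()
        if not line:
            author = None          # blank line ends a commit block
        elif author is None:
            author = line          # first line of a block is the author
        else:
            touched[line].add(author)
    return dict(touched)
```

Run against years of history, a summary like this shows at a glance that, say, two people ever touched the invoice module, which is the historical record the access list requires.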
Are you using an outsourcing partner? As part of any outsourcing project or contract, each individual on the team should have been through a background check. Does the outsourcing partner have specific secure coding standards in place? It is recommended that a source code access list be part of the outsourcing contract and be handed over when the service provider completes the project.
Is the outsourcing partner doing the work on U.S. soil? Many state and federal governments have strict guidelines about where the development has to be completed. If this is a requirement, make sure the contract includes a clause that all sub-contracting work must take place on U.S. soil and all sub-contractors must have a current background check and be reported to you as the authorizing authority of the work being completed.
2. Scan new code using static analysis. At a minimum, check that bounds are being enforced.
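A real static analyzer goes far deeper than this, but the flavor of a bounds check can be shown with a toy scanner that flags C library calls which perform no bounds checking. This is a sketch of the idea only, not a substitute for a proper tool:

```python
# Sketch: a toy static check that flags C calls with no bounds checking.
# Real static analysis tools do data-flow analysis; this only pattern-matches.
import re

UNBOUNDED_CALLS = ("gets", "strcpy", "strcat", "sprintf")
PATTERN = re.compile(r"\b(%s)\s*\(" % "|".join(UNBOUNDED_CALLS))

def scan_for_unbounded(source: str) -> list:
    """Return (line_number, function_name) pairs for risky calls."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            findings.append((lineno, match.group(1)))
    return findings
```

Even a crude pass like this, wired into the check-in process, forces a human to look at every unbounded copy before it reaches the main branch.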
3. Diagram your software to see the relationships between the various parts of your application. Many SCM tools offer some of this capability. You want to determine whether there are rogue libraries, programs or APIs that you may not know about. There are products on the market that can do this and also perform dependency checking. Review these diagrams and dependency charts every couple of months.
4. Categorize your source code by level of perceived vulnerability, as follows:
- Low-risk – code that does calculations, or a standard library that all your source code uses
- Medium-risk – code that accesses log files or has external file access
- Higher-risk – code that communicates with the database
- High-risk – code that calls or consumes Web services
- Extreme-risk – code that sends or receives data over the wire outside your firewall
Note: These initial assessment categories will help you break the review process into smaller chunks of work, so it does not kill whatever velocity the team has in its day-to-day operations. If you have software that can do this, great; but whenever a potential exploit is reported, by human or machine, always record in the SCM system that it has been reviewed and checked off, for future reference.
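As a first pass, the categorization itself can be partially automated by spotting telltale calls in the source. The signal strings below are purely illustrative assumptions, chosen to match the five tiers above; a real triage would lean on your dependency diagrams rather than string matching:

```python
# Sketch: first-pass risk tiering by spotting telltale calls in source
# text. The signal strings are illustrative assumptions, not a standard.
RISK_SIGNALS = [            # checked from highest risk downward
    ("extreme", ("socket.", "urlopen", "requests.")),   # crosses the firewall
    ("high",    ("SOAPClient", "call_web_service")),    # web services
    ("higher",  ("cursor.execute", "db.query")),        # database access
    ("medium",  ("open(", "logging.")),                 # file/log access
]

def classify_risk(source: str) -> str:
    """Return the highest risk tier whose signal appears in the source."""
    for tier, signals in RISK_SIGNALS:
        if any(sig in source for sig in signals):
            return tier
    return "low"
```

The point of automating the first pass is only to route each module into the right-sized review queue; the tier a human assigns after reading the code always wins.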
Caution: For high-risk and extreme-risk modules, any time more than 5 percent of the source code changes, a human should review those changes. As stated earlier, your SCM tool should be able to tell you how much code was modified.
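The 5 percent threshold can be checked automatically. A minimal sketch using Python's standard-library difflib, under the assumption that two revisions of a module are compared as plain text:

```python
# Sketch: flag a module for human review when more than 5 percent of its
# lines changed between two revisions, using stdlib difflib.
import difflib

def percent_changed(old: str, new: str) -> float:
    """Percentage of lines that were added, removed, or replaced."""
    old_lines = old.splitlines()
    new_lines = new.splitlines()
    sm = difflib.SequenceMatcher(None, old_lines, new_lines)
    unchanged = sum(block.size for block in sm.get_matching_blocks())
    baseline = max(len(old_lines), len(new_lines), 1)
    return 100.0 * (baseline - unchanged) / baseline

def needs_human_review(old: str, new: str, threshold: float = 5.0) -> bool:
    """True when the change volume exceeds the review threshold."""
    return percent_changed(old, new) > threshold
```

Hooked into a commit or merge trigger, this is the "notify the team if more than X percent is changed" behavior an SCM tool should provide out of the box.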
5. Scan the built software. After the build, it is recommended that you scan the applications with both anti-virus and anti-malware tools before putting them into production or shipping the product.
Portability and incoming/outgoing connections
The sad fact is that even if you do all of the above and have access controls in place, your code is still very likely exposed. How hard is it to bring a thumb drive into the workplace and copy a repository? Not hard at all, and most likely there will be no audit trail to prove it happened.
I once gave a presentation at a government contractor and had to surrender my computer for a scan when I entered the facility. When I left, I surrendered it again so they could check whether any files had changed. Today, though, with the ease of copying from one portable drive to another, it is even easier to walk out with code.
Thus, the only conclusion I can see is more analysis of the code, before compile and after compile: deep, penetrating analysis to ensure all incoming and outgoing connections are to specification. Next, review the extreme-risk code often, with human eyes, and, if possible, have the SCM tool notify the team when more than X percent of it is changed or added. Finally, make sure that everybody on the team, in the division and in the company understands exactly what is at stake.
Make the investment now, before tragedy strikes and you’re saying, “Yeah, we should have done X!”
Mike Rozlog’s 20-year software and technology industry experience brought him to dBase as the CEO to build the next-generation business intelligence products and data management tools. Mike is a dynamic leader known for driving innovation, product development, market analysis and product evangelism efforts. He has hands-on technical experience across architecture, enterprise and commercial software development. He is widely published and quoted in industry publications and is a frequent speaker at industry conferences. Contact him at firstname.lastname@example.org.