We serve our clients as integrated, strategic business partners by planning, designing, and building innovative software solutions that help businesses run more efficiently and realize their full potential.

Friday, December 27, 2013

“An ounce of prevention….” Why Network Security Should Matter to Small to Mid-Sized Businesses

Before assuming my role as managing partner at eTag Technologies, I worked for many years as a senior technology consultant for businesses ranging from small companies to large enterprises. One commonality I observed while working alongside organizational leaders is that many of them lacked a team, or even an individual, whose primary focus was to analyze and detect network breaches.

It is easy to understand why these companies find it hard to justify investing in such a position, but when confidential client information or intellectual property is stolen, no amount of money can repair the damage to your clients and to your business's reputation. All too often, the senior members chosen to serve on the information technology governing body are demoted or asked to resign. A breach affects everyone, and it is only a matter of time before you and your organization discover just how vulnerable you may be.

Business-related cyber-crime is on the rise, and every organization should strongly consider investing in securing its infrastructure and building a team to protect it. If no one in your organization is responsible for network intelligence and forensics, there’s a good chance you’ll suffer a breach in the near future (assuming you haven’t already).

Some of you may ask: can we truly prevent intrusions? The short answer is no. If someone wants to get in, they will. Most break-ins are not direct assaults on your firewall; most breaches start with something far simpler. For example, all it takes is for someone in your organization to open a “phishing” e-mail. Once the unsuspecting user clicks the link in the message, malware launches undetected, compromises the computer, and steals the user's credentials without the user ever knowing it. The intruder then digs and searches for valuable information. If the computer is joined to a domain, the intruder will most likely try to use those same credentials to compromise files, data, and servers.

You are probably asking yourself: “Why am I spending all this money on hardware and software if an intruder can still gain access?” Don’t forget that by having preventive assets in place, you make it harder for the intruder to compromise your systems. Instead of seconds, it may take the intruder days, weeks, or even months to reach internal resources. So where does a network security analyst come into play? Consider them the last line of defense. Prevention eventually fails and breaches are inevitable, so you need someone to constantly Plan-Resist-Detect-Respond.

Timing is the key factor for your security team, as intruders rarely execute their entire mission within minutes. There is usually a window of opportunity, from the moment of initial unauthorized access, in which to detect, respond to, and contain intruders before they can finish the job. They might gain access, but you can eliminate them before they get the data they want. Intruders can and will compromise your systems, but your business can win if you have network security assets in place that can detect and respond to intrusions.

Now more than ever, businesses need to plan on protecting their confidential client information and intellectual property. Hackers have declared all-out war on every machine connected to the web; don’t make it easy for them. Software and hardware prevention mechanisms can help, but a network security analyst can frustrate, resist, and even fend off intruders before they wreak havoc on your business and your clients.

I would like to thank Kevin Mandia, CEO of Mandiant, for inspiring and helping me understand the value of network security in all types of business.

Remember: THINK…DESIGN…BUILD


By: Alex Martinez, Entrepreneur / Chief Transformation Officer at eTag Technologies

Thursday, December 26, 2013

Software development: It’s not the code you write, but how you write the code that makes all the difference…


When it comes to programming, there typically is no one “right” way to write code. Every developer, depending on his or her education, background, experience, and so on, writes code differently. In the programming realm (just like in real life), there are many ways to say or write the same thing; some are more efficient, while others are more intuitive and easier to understand.

Lots of applications are written using code lifted straight from textbooks. Fortunately or unfortunately (depending on whether you are human or machine), textbooks are written primarily so people can easily follow and grasp concepts; they are not typically written with performance in mind. Very little thought, if any, is given to alternative ways of writing the code and to making it more efficient.

For example, while current browsers continue to optimize their JavaScript engines for basic operations, how the code is written can still directly impact the performance, capability, scalability, and end-user experience of any application. Let's analyze a simple JavaScript loop. The purpose is to iterate through a simple array or list of items using several different approaches. Notice the differences in the performance of each browser:

[Benchmark results (screenshots): IE 11 | Firefox 25.0.1 | Google Chrome 31.0.1]

As you can see, each loop implementation serves exactly the same purpose, yet there are quantifiable variations in performance. Every browser has strengths and weaknesses, and the target browser must be taken into consideration when writing code. For example, while Chrome may be optimized to perform loop operations more efficiently than IE, it does not perform as well when invoking functions. This explains why Chrome, in this specific example, is the slowest when using the jQuery approach.
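
For readers who are curious what these variations actually look like, below is a minimal sketch of the kinds of loop implementations a test like this compares. The exact snippets are in the jsPerf test referenced at the end of this post; the sample array and the jQuery variant here are illustrative assumptions, not the code from our test.

// Illustrative loop variants over the same array (not the exact jsPerf snippets).
var items = ['a', 'b', 'c', 'd', 'e']; // placeholder data

// 1. Classic for loop, reading .length on every iteration
for (var i = 0; i < items.length; i++) {
  console.log(items[i]);
}

// 2. For loop with the length cached up front
for (var j = 0, len = items.length; j < len; j++) {
  console.log(items[j]);
}

// 3. Array.prototype.forEach -- one callback invocation per element
items.forEach(function (item) {
  console.log(item);
});

// 4. jQuery's $.each -- also one callback per element; requires jQuery on the page
// $.each(items, function (index, item) {
//   console.log(item);
// });

Variants 3 and 4 invoke a function for every element, which is exactly where the function-call overhead described above comes into play.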

A few other things to consider…

It's important to consider the environment (operating system, hardware, browser, etc.) where your code will be executed. Equally important is benchmarking the environment where the code will be running. This allows you to develop adaptive code that stays optimized regardless of the environment.
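
As a rough illustration of that idea, the sketch below benchmarks two equivalent implementations once at startup and then uses whichever was faster in the current environment. The functions, data size, and iteration counts are assumptions chosen for illustration; a real application would benchmark its own hot paths.

// Minimal adaptive-code sketch: measure two equivalent implementations,
// then use whichever performed better in this particular environment.
function sumWithForLoop(arr) {
  var total = 0;
  for (var i = 0, len = arr.length; i < len; i++) {
    total += arr[i];
  }
  return total;
}

function sumWithForEach(arr) {
  var total = 0;
  arr.forEach(function (n) { total += n; });
  return total;
}

function timeIt(fn, arr, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    fn(arr);
  }
  return Date.now() - start;
}

// Build some sample data and time both approaches.
var data = [];
for (var k = 0; k < 100000; k++) {
  data.push(k);
}

var forLoopMs = timeIt(sumWithForLoop, data, 50);
var forEachMs = timeIt(sumWithForEach, data, 50);

// Pick the implementation that was fastest here and use it from now on.
var sum = (forLoopMs <= forEachMs) ? sumWithForLoop : sumWithForEach;
console.log('Using ' + (forLoopMs <= forEachMs ? 'for loop' : 'forEach') +
            '; result = ' + sum(data));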

What can you do with this information? That depends on you. You could try to convince your company to switch to a different browser, cross your fingers, and quietly hope that your application is not using code that runs slowly or makes the application unusable.

Or…..

A better idea might be to set the goal of writing smarter, more flexible, adaptive code that takes into account all of the different techniques and environments that are out there. It’s your call. So, whether you are a developer or hiring a developer, it always pays to THINK…DESIGN…BUILD before you start any project. A little R&D like this can go a long way toward ensuring the overall success of your project.

We would like to thank jsPerf (http://jsperf.com/) for the use of their tool. jsPerf aims to provide an easy way to create and share test cases, comparing the performance of different JavaScript snippets by running benchmarks.

Here is the link to our test:



Thursday, December 19, 2013

Understanding Software Testing: Why testing is so important


Thanks to the “challenges” of launching the Healthcare.gov website, there has been a lot of discussion recently about the importance of testing. In this blog post, we will try to provide a very high-level overview of the four major levels of testing: unit, integration, system, and acceptance testing.

DISCLAIMER: Before we get started, we have a brief disclaimer for all of you testers and programmers out there: we realize that there are many different types of tests in regular use. We are not claiming that these are the only tests out there or that this is an exhaustive list. We are simply trying to give the less technical people in our audience an introduction to testing.

Testing should always play a major role in the development lifecycle. Investing a little extra time and money during the development cycle will pay dividends in the long run--the assumption being that untested code is code that is going to fail.

The goal of testing is to identify any defects or weaknesses and to ensure the software meets all of the user requirements before it is released. To elevate testing practices to a higher level, it is crucial to first understand the stages and various methods of testing. As we mentioned, there are four main stages of software testing: unit testing, integration testing, system testing, and finally user acceptance testing.
Unit Testing

The first stage is unit testing. Unit testing focuses on validating the functionality of internal structures and methods. It involves isolating and testing small portions of source code through developer-written unit tests. Unit testing is typically performed by developers early in the development stage. This allows for the early discovery of defects, which is a huge advantage, as defects are least costly to correct when they are caught early.
There are several aspects of a unit test that maximize its effectiveness. One important characteristic of an effective unit test is that it should only test a small portion of code. This greatly reduces the difficulty in correcting a defect by minimizing the amount of code a developer needs to search through to find the problem. Unit tests should be kept separate from the code being tested. This allows for the application to be deployed without having to run its unit tests each time. Also, a unit test should be completely independent of other unit tests. They should be executable in any order. Finally, a unit test suite should test both the expected behavior and exception handling for each method. This will often require each method to have multiple unit tests.
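
As a concrete (if simplified) illustration, here is a sketch of what a pair of unit tests might look like in JavaScript, using Node's built-in assert module and a hypothetical divide function. Note how each test checks one small piece of behavior, including the exception case; in a real project the tests would live in a separate file from the code they exercise.

// The small unit of code under test (hypothetical example).
function divide(numerator, denominator) {
  if (denominator === 0) {
    throw new Error('Cannot divide by zero');
  }
  return numerator / denominator;
}

// Developer-written unit tests, each independent and narrowly focused.
var assert = require('assert');

// Test 1: expected behavior for normal input.
assert.strictEqual(divide(10, 2), 5, 'divide(10, 2) should equal 5');

// Test 2: exception handling for invalid input.
assert.throws(function () { divide(1, 0); }, /Cannot divide by zero/,
  'dividing by zero should throw');

console.log('All unit tests passed.');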

Integration Testing
The second stage of software testing is integration testing. Integration testing involves combining individual units that have already passed unit testing and testing their functionality as a group. Performing this testing after successful unit testing will reveal errors relating to the interfaces between units. There are four main strategies when approaching integration testing: top-down testing, bottom-up testing, big-bang testing, and sandwich testing.

The top-down approach requires that the highest-level modules be tested first; lower-level modules are tested progressively afterwards. There are several advantages and disadvantages to this method. One advantage is that testing the highest-level modules first allows major design flaws to be caught early; the need for drivers is minimized, and a demonstration of an early prototype is possible. There are, of course, some disadvantages. This method relies heavily on stubs, which can complicate the testing process and introduce errors. Stubs are dummy modules that simulate low-level modules; they are needed in top-down integration because the high-level modules are tested before the low-level modules.
The bottom-up approach integrates and tests the lowest-level units first; high-level modules are tested later in the process. An advantage here is that the need for stubs is minimized; instead, this approach relies heavily on drivers. Drivers are dummy modules that simulate high-level modules by calling the low-level methods we want to test. Since the high-level modules are tested last in bottom-up integration, drivers act as their temporary replacements so that the low-level methods can be tested first. A disadvantage of this method is that the high-level logic is tested late, which rules out releasing an early prototype with limited functionality.
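
To make the stub/driver distinction concrete, here is a small hypothetical JavaScript sketch: a high-level reporting module tested top-down against a stubbed data layer, and a low-level data function exercised bottom-up by a simple driver. The module and function names are purely illustrative.

// --- Top-down: test a high-level module against a STUB of the layer below ---

// Hypothetical high-level module under test.
function buildReport(dataLayer) {
  var records = dataLayer.fetchRecords();
  return 'Report with ' + records.length + ' records';
}

// Stub: a dummy low-level module that returns canned data instead of hitting a database.
var dataLayerStub = {
  fetchRecords: function () {
    return [{ id: 1 }, { id: 2 }];
  }
};

console.log(buildReport(dataLayerStub)); // "Report with 2 records"

// --- Bottom-up: exercise a low-level function through a DRIVER from above ---

// Hypothetical low-level function under test.
function fetchRecordById(id) {
  return { id: id, name: 'Record ' + id };
}

// Driver: a throwaway caller that stands in for the not-yet-integrated high level.
function driverForFetchRecordById() {
  var record = fetchRecordById(42);
  console.log(record.name === 'Record 42' ? 'PASS' : 'FAIL');
}

driverForFetchRecordById();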

In big-bang integration testing, all of the modules are integrated at once and the system is tested as a whole. An advantage is that stubs and drivers are rarely required. This method is generally recommended only for small systems; otherwise its disadvantages often outweigh its advantages. Big-bang integration requires that all of the modules be ready before any testing can be done, and debugging and fault localization can become very difficult, whereas both are usually much easier with top-down or bottom-up integration.

Sandwich integration combines top-down and bottom-up testing. The system is divided into three layers: top, target, and bottom. The target layer, usually somewhere in the middle, is identified first. The components above the target layer are tested with the top-down strategy, the components below it are tested with the bottom-up strategy, and testing converges at the target layer. A big advantage here is that top-down and bottom-up testing can occur simultaneously. The need for drivers and stubs is reduced, but both will still be required.
System Testing
Once integration is complete, the next step is system testing. System testing is the testing of a fully integrated system to verify its functionality and ensure that the user requirements have been met. System testing is a form of black-box testing, meaning it does not require internal knowledge of the code; instead, it verifies the functionality of the system against its specified requirements. Many different types of testing should be included in system testing; some critical ones include load, security, and regression testing.

The term load testing is often used interchangeably with performance and stress testing. The main objective of this type of system testing is to confirm that the system can operate efficiently under both normal and stressed load conditions. The system is deliberately put under high stress to determine the maximum capacity it can handle and to identify any bottlenecks.
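
As a very rough sketch of the idea (real load testing is normally done with dedicated tools rather than a hand-rolled script), the following Node.js snippet fires a batch of requests at a hypothetical endpoint and reports the average latency. The URL and request count are placeholders.

// Minimal load-test sketch (illustrative only): send N GET requests to a
// target URL and report how many completed and their average latency.
var http = require('http');

var TARGET = 'http://localhost:8080/'; // hypothetical endpoint under test
var REQUESTS = 100;                    // simulated load

var completed = 0;
var totalMs = 0;

for (var i = 0; i < REQUESTS; i++) {
  (function () {
    var start = Date.now();
    http.get(TARGET, function (res) {
      res.resume(); // drain the response so the socket is released
      res.on('end', function () {
        totalMs += Date.now() - start;
        completed++;
        if (completed === REQUESTS) {
          console.log(completed + ' requests completed, average latency: ' +
            (totalMs / completed).toFixed(1) + ' ms');
        }
      });
    }).on('error', function (err) {
      console.log('Request failed: ' + err.message);
    });
  })();
}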
Security testing is performed to ensure that the system is able to protect against any attempts of unauthorized access. Proper security testing should verify the following aspects are secure: authentication, authorization, confidentiality, integrity and non-repudiation. Penetration tests are often utilized in this testing. This test simulates a potential attack a hacker might execute and evaluates the system’s response to the attack.

Regression testing involves retesting the system after a change has been made to the software. This is to ensure that the change has not introduced any new defects to other parts of the system. It generally does not require new test cases to be written. It is often performed by rerunning previous test cases from earlier testing. In particular, it can often involve rerunning unit tests.
User Acceptance Testing

The final stage is user acceptance testing. User acceptance testing tests the system for acceptability. Its purpose is to verify that the system meets the business requirements by testing the system in its “real world” environment. This is done by involving end-users and real data in the testing process.
The testing is done by the system’s intended users or business representatives. There are often misunderstandings or miscommunications between developers and clients during the requirements-gathering process, and inviting clients into the testing process provides an excellent opportunity to identify and resolve them. The data used is often real data supplied by the client rather than developer-written test data. This is important because real data can exercise the system differently than test data and reveal failures the test data would miss.

Once user acceptance testing is complete, the system is ready to be deployed. If all of the testing practices mentioned here have been applied diligently, the final product will have undergone the necessary and extensive improvements to satisfy the client.

Now that we have explained the four basic stages of testing, it must be noted that all of the testing in the world cannot fix a bad design. Proper care needs to be taken up front to create a solid project plan that meets the needs and business requirements of the client. Testing should always be a part of that design.
Remember: THINK…DESIGN…BUILD


Wednesday, October 30, 2013

Timing Is Everything: When You Should Publish Your Content




Nick Randazzo is a research-savvy intern at AWeber. He dug up some interesting stats for us, and here they are!
Whether or not you’ve ever thought about when to publish blog posts or social media updates, the time of day you post can really affect your results.
People see your content in different places depending on the time of day. To get the most views, pay attention to when you post. If you’re not sure when that should be, don’t worry: with graphs from HubSpot’s Dan Zarrella, we’re about to break it all down for you.

Morning: Blog Posts, Then Facebook

Most people view articles and blogs during the morning. This is when people catch up on the news and search for the day’s ideas; Zarrella’s graph of blog views illustrates this.

Because blog views peak in the morning, take advantage of publishing your posts early. The earlier you publish, the more people will read your post.
Facebook posts get the most shares between 8:30 and 10:00 am.

You should make your most important posts in the mid-morning. They’ll get more shares this way.
OK, so the most important information should get out via Facebook and blogs in the morning, but what about the rest of the day?

Afternoon And Evening: Twitter

Twitter sees the most activity in the late afternoon.

The important line to notice is the “ReTweet” curve. Your own Tweets are the content you put out originally, but ReTweets are shared beyond your current followers, so they introduce new people to your content.
The rate of ReTweets increases as the afternoon goes on, so if you want to promote your content through Twitter efficiently, late afternoon is the best time to do it.
A recap: most people read blogs and check Facebook in the morning, then turn to Twitter later in the day.

What About Blog Posts And Facebook Later In The Day?

The graph above shows that a respectable number of people still read blogs all throughout the day.
While it is still most effective to get them out early, sending them out later is not the end of the world.
Facebook sharing spikes again in the evening.
You can best use Twitter to close the gap between Facebook’s morning peak and evening spike.

How Do Emails Fit In?

Email is a little different.
There is no universal best time to send emails. It changes based on what you’re advertising and who your audience is.
If you’ve already created a consistent experience with your readers (such as sending Tuesdays at lunchtime or Sunday evenings), it’s likely they’re comfortable with the pattern and might not want it to vary.
Otherwise, the only way to find out the best time to send to your subscribers is to test!
Send out your emails at different times and see how your results vary: a lot of people open their emails in the morning, some open in the afternoon, and many others don’t even have the option to view their emails during the day.
You have to just keep on experimenting until you find the best time for you!

Re-posted by: J. Heath Shatouhy, Senior Vice President / Partner, eTag Technologies