Data Replay Services

Computer Crime / Digital Discovery / CCTV Image Recovery / Computer Forensics

Archive for the ‘Storage News’ Category

Updating Security Patches

without comments

Maintaining the security patches on Windows desktops and servers is vital to prevent unpatched operating system or application vulnerabilities from being exploited by a virus, worm, or Trojan horse. One of the most common reasons that Windows machines are compromised is that vulnerabilities have not been patched, even though the fix was made available weeks or months earlier.

If your system contains vulnerabilities that have not been patched, it can be exploited and malicious files written to the hard disk. There is very little protection that can be applied to the hard drive itself: once malicious code is written to the drive, the disk is infected.

In a corporate infrastructure, it is wise to test patches before pushing them out to your production servers and workstations. If something goes wrong, you’ll detect it on your test machines and you can contact Microsoft and report the issue. Chances are that the problem is already known and a solution is right around the corner.

Example:
Updating Security Patches on a Windows 2000 Server
1. Click Start > Windows Update. (Note: if you are using Windows XP, Windows Update won't be immediately available in the menu list. You can open Internet Explorer, click Tools, and then click Windows Update. The default location for Windows Update is Start > All Programs > Windows Update, near the top of the menu.)
2. When Windows Update opens, if your computer does not have the latest version of Windows Update installed, a Security Warning box will appear. Click Yes.
3. When the offer appears to download the new version of the software, click Install Now.
4. When offered, click the Express button.
5. After it installs, Windows Update will scan your computer and search for the latest updates. When they are presented, click Install Updates.
6. Windows Update will ask you to verify that you want to download and install the updates. Click Install Updates (you may be required to accept the EULA too).
7. When all the updates have been installed, click Restart Now when prompted.
8. Once the system has rebooted, log on to the server.
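
If you manage several machines, you can also check for pending updates from a script. Here is a rough sketch that queries the Windows Update Agent's COM interface; it assumes the pywin32 package is installed and that the machine has the Windows Update Agent available.

    import win32com.client

    # Connect to the Windows Update Agent and search for software
    # updates that are not yet installed.
    session = win32com.client.Dispatch("Microsoft.Update.Session")
    searcher = session.CreateUpdateSearcher()
    result = searcher.Search("IsInstalled=0 and Type='Software'")

    print("%d update(s) pending:" % result.Updates.Count)
    for i in range(result.Updates.Count):
        print(" - " + result.Updates.Item(i).Title)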

Problems? If the update process did not complete successfully, your computer will be left in a vulnerable position. Incomplete updates are usually caused by some sort of error on the computer's hard drive, and we recommend you seek professional assistance.

Written by Betty

February 7th, 2016 at 10:36 am

Posted in Storage News

Western Digital's 6, 8 and 10 TB Hard Drives

without comments

Western Digital have introduced three new hard drives that push the boundaries of storage capacity still further (http://www.kitguru.net/components/hard-drives/anton-shilov/wds-hgst-introduces-new-6tb-8tb-and-10tb-hard-disk-drives/). The new Ultrastar offerings bring huge storage potential to buyers, and the largest models are designed for 'cold data storage'. That's a new phrase which basically means the storage of data that has to be kept but not accessed regularly, e.g. records kept for tax purposes, archives and so on.

While these hard drives offer huge storage capacities, they are not optimised for fast data access: the effort has gone into the amount of storage space rather than the speed of data transfer.


Written by Betty

December 17th, 2014 at 1:47 pm

Posted in Storage News

What is 3D Printing ?

without comments

Many processes can these days be modelled or reproduced by computer. One of the most exciting of all is 3D printing. The term '3D printing' is a little misleading really, because when you mention printing and computers people always think of ordinary computer printers – something that connects to their computer via a USB cable and a power socket. Additive modelling is a more correct and appropriate term for this process.

The first part of printing in 3D is to produce an accurate digital representation, or model, of what you want to create or print. This is usually achieved by photographing the item from enough angles that, once the information is digitised, the computer is able to extrapolate the dimensions, detail and shape of the object from any angle. This process is best done under computer control, so that exact measurements and details are gathered, enabling a complete digital model to be created.

Although additive modelling has been around for several years, it had only really been used commercially. Midway through the 2000s, open source software projects – particularly one called RepRap – brought additive printing out from its entirely commercial domain and into the home hobbyist marketplace. These days, pre-designed item templates can be downloaded from the internet that allow users to 'print' whatever the template describes, for example a funnel or a cup.

Commercial additive modelling has developed rapidly in the last few years and these days many automotive parts and developments are made using it. Additive modelling printers that work in metal have also been developed. These days many ideas are prototyped as projects using additive modelling.

Links:

3D Printing / Additive Modelling on Wikipedia
CandyFab – Additive modelling using sugar

Written by Betty

September 8th, 2014 at 3:02 pm

Posted in Storage News


When To Use Data Recovery

without comments

Even though you can find a number of software programs out there for data recovery, it is not always in your best interests to try and recover data from a drive by yourself. If you have a straightforward problem, data recovery software will help. If the problem is something minor to do with a faulty hard disk drive that is giving you problems, you can probably get the information you need to recover the data and repair the hard drive yourself from a site like http://data-recovery-tips.co.uk. Be careful though, because if you have a serious hard disk fault, you can and most probably will destroy crucial data by attempting to do this all on your own. At times like these, you should hire an expert to do it for you. It could be very expensive, but as I have said time and again, if you must regain the data from your drive, then the cost is worth every penny.

So what can be learnt from this scenario? Prevention is better than cure! After a successful (or unsuccessful) data recovery attempt, you may be asking yourself what you could have done to stop the difficulty in the first place. There's simply no way round the tiresome task of backing up all of your data! Backing up is something that many folks leave until their hard drive crashes and they have a brush with hard drive recovery. Imagine how much time and money you'd have saved if you'd backed everything up!

Written by Betty

June 19th, 2014 at 10:03 am

SQL Injection Attacks

without comments

A colleague of mine has received a notice from Google via his Webmaster Tools account telling him that his site has been hacked:

“Unfortunately, it appears some pages on your site may infect visitors with software designed to access confidential information or harm their computers. You may not be able to easily see these problems if the hacker has configured your server to only show malicious content to certain visitors. To protect visitors to your site from malware, Google’s search results now display a warning when users click a link to your site.”

He's yet to discover how this was accomplished, but my money is on an SQL injection via the site's online database. SQL injections can be used to upload code to a web site, often a PHP file, which the hacker can then run remotely to take control of the host web site. Taking control usually means redirecting the web traffic off somewhere else (like a malware / trojan site).
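
To see why injection is so effective, here is a tiny sketch using Python's built-in sqlite3 module; the same principle applies to the PHP and MySQL setups behind most web sites, and the fix shown – a parameterised query – is the standard defence.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    user_input = "' OR '1'='1"   # a classic injection payload

    # Vulnerable: the payload is spliced into the SQL and matches every row.
    query = "SELECT * FROM users WHERE name = '%s'" % user_input
    print(conn.execute(query).fetchall())        # [('alice', 'secret')]

    # Safe: a parameterised query treats the input as a value, not as SQL.
    print(conn.execute("SELECT * FROM users WHERE name = ?",
                       (user_input,)).fetchall())  # []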

Google monitors web sites and attempts to detect when they have been compromised – this is how my colleague found out. SQL injection attacks are a common way for hackers to take control of web sites, but they are not the only way. Another common route is to gain access to the web site by cracking a username and password. Many sites these days have an extra step at login that introduces a Captcha into the login sequence – a code that is difficult for machines to read and easy for humans. You can read more about Captcha here: http://www.captcha.net/

Of course, this method is not 100% secure. While it's often easy for people to identify the words or number sequence in the captcha boxes, it's not an impossible task for a machine, and there are many types of captcha-cracking software able to break many of the captcha codes. It's also possible to get captchas read and cracked by humans, often in poorer countries, where there are rooms of computers with people sat at them whose job is to type what they see. The answer is then relayed back to the captcha program, the captcha is broken and the system accessed.

Written by Betty

May 23rd, 2014 at 8:40 am

Storage Area Networks

without comments

Storage Area Networks (otherwise known as SANs) – what are they exactly? It's a question that's trickier to answer than you may think. My job has taken me off researching again, this time into those most noisy of places: server rooms. The one I went to was vast and belonged to an ISP. It's staggering how much data these places handle: they told me they have over 1000 servers, a mixture of Windows and Linux operating systems, each with at least 5 disks in its RAID array. A quick calculation means this place had in excess of 5000 hard disk drives, all spinning and holding data.

A Storage Area Network (SAN) is a type of computer architecture where remote storage devices, such as disk arrays, are attached to servers in such a way that they appear to be local to the system. Sharing storage like this can simplify administration and adds flexibility, as the storage doesn't have to be physically moved around. A SAN environment may bring together a variety of hardware, from RAIDs to Network Attached Storage. As a result of the numerous components that may make up a Storage Area Network, it can be rather sophisticated, and the more sophisticated the system, the bigger the odds of a failure in a single element – which in turn can affect the whole storage system.

Investing in a Storage Area Network can provide flexible storage management and high performance: a SAN enables you to allocate space to specific devices, to hold the whole storage pool open, or to select who can access specific data. Storage Area Network systems seldom fail, and when they do it is typically due to user error, usually within image management or the SAN's management features. Storage Area Networks support large numbers of users, which makes them useful for companies that have a great deal of data but need it centralised in one accessible location. Because of the large numbers of users, as well as the nature of the data that is saved on them, Storage Area Networks need continuous monitoring and data recovery available twenty-four hours a day.

Servers can be booted from a Storage Area Network. This brings advantages: if a server is faulty, the Storage Area Network can be reconfigured so that the Logical Unit Number (LUN) of the faulty server is used by a different server. Storage Area Networks can also be used in disaster protection, because they can be spanned across remote locations using an IP Wide Area Network; this enables storage replication performed via disk array controllers, server software or specialised Storage Area Network appliances.

There is a way to gauge the performance and capacity of a Storage Area Network: quality of service. Quality of service allows the required storage performance to be maintained and measured for network users. The chief variables that influence Storage Area Network quality of service are bandwidth, latency and queue depth. Bandwidth is the rate of data throughput available, latency is the time delay for a read/write operation to happen, and queue depth is the number of operations waiting to be run against the disks. Quality of service can be influenced by things like a spike in the level of data traffic from one user, which in turn affects the other users; where quality of service mechanisms are in place, utilisation and performance can be maintained and predicted.
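
As a rough illustration, two of those variables can be estimated from any host with access to a SAN-backed volume. The sketch below simply times sequential reads against a test file; the path is hypothetical and the figures only reflect that one workload.

    import time

    PATH = "/mnt/san_volume/testfile"   # hypothetical SAN-backed file
    BLOCK = 1024 * 1024                 # read in 1 MiB blocks

    latencies, total = [], 0
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while True:
            t0 = time.perf_counter()
            chunk = f.read(BLOCK)
            latencies.append(time.perf_counter() - t0)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start

    print("throughput: %.1f MB/s" % (total / elapsed / 1e6))
    print("mean read latency: %.2f ms" % (1000 * sum(latencies) / len(latencies)))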

When employing a Storage Area Network it is important to be careful with over-provisioning, where additional disk space is added to compensate for peak traffic loads. Where peak traffic loads cannot be predicted, the additional capacity may cause bandwidth to be consumed entirely and latency to rise over time; this is also known as Storage Area Network degradation.

Link: This article is covered in more depth on the Data Recovery Reviews web site.

Written by Betty

January 29th, 2014 at 4:53 pm

Broken Hard Drives Still Contain Recoverable Data !

without comments

When you store important information on a hard disk drive, you take the chance of losing that information, because hard disk drives can break for a variety of different reasons. If you're not computer literate, losing data on a hard drive can be distressing because you don't know what to do to recover it. Fortunately, there are companies that can help you get that data back. These are known as data recovery companies, and they can retrieve data from nearly any type of hard drive in any type of condition. So when you find yourself in a situation where you have lost valuable information, you should not assume that the data cannot be retrieved.

There are some companies that specialise in repairing damaged hard drives. In the process of retrieving your data, these data recovery companies can also repair your hard disk. The types of problem data recovery companies can repair include broken boards, mechanical hard drive faults such as the 'click of death' and beeping hard drives. A recommended company in the UK is RAID and Server Data Recovery – you would use a hard drive recovery company when your local IT provider can no longer help you.

Sometimes, when there is no damage to the hard disk, software programs can be used to help you retrieve your data by yourself. There are usually trial versions available on the internet that will tell you first what they can and can't recover. If you decide you want to recover the data they find, you'll have to buy the program, which usually costs several pounds. But at least it will get your data back for you. A word of caution though: what may at first appear to be a simple hard drive problem may be something far more serious, and trying to recover the data yourself may cause many more problems. So if in doubt, you should always seek professional help, as there could be problems inside your disk that prevent it from working properly.

Remember that just because your hard disk breaks or the information on it seems to have disappeared, this doesn't mean that the data is lost forever. Skilled data retrieval companies can find and rescue lost data on a damaged hard drive. In the process of restoring the data, the hard drive problems that caused the data to go missing in the first place will be fixed, which is helpful because it ensures it won't happen again.

Developments in Data Centre Hard Drive TCO

without comments

Graph: the main TCO factors for data centre design

Social media, data warehousing, online retailing and banking businesses run the most demanding cloud infrastructures, with thousands upon thousands of hosts managing petabytes of data, so they all understand the significance of crafting an end-to-end storage strategy that looks beyond the one-time capital cost to the crux of the real costs: the operating costs that make up Total Cost of Ownership (TCO). Cloud architects are creating new guidelines that illustrate the bottom-line value of taking a system-wide approach to storage. Usually building on commodity components and software, they deploy drives optimised for specific applications that demand efficient power utilisation, density and dependability. The important thing: cloud data centre choices are now based on value instead of price, and TCO is how value is measured.

Embracing the best storage scheme for public, private and on-premise data centre infrastructures can make a huge difference to your ability to significantly lower TCO. The key to successfully affecting TCO goes well beyond the price of the drives themselves, or measuring effectiveness in terms of cost per gigabyte alone.

Understand the dependability of the hard drives in the data centre: the more dependable the drive, the less time and money is spent maintaining it. Drives rated at HGST's industry-leading standard of 2 million hours Mean Time Between Failures (MTBF) will encounter 40% fewer failures during the five-year life of the drive than those rated at 1.2 million hours.
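
That 40% figure falls straight out of the arithmetic: with a constant failure rate, expected failures over a period scale inversely with MTBF. A quick sketch:

    HOURS_PER_YEAR = 24 * 365
    YEARS = 5

    def expected_failures(mtbf_hours):
        # Hours in service divided by mean time between failures.
        return HOURS_PER_YEAR * YEARS / mtbf_hours

    standard = expected_failures(1_200_000)   # drives rated at 1.2M hours
    premium = expected_failures(2_000_000)    # drives rated at 2M hours
    print("reduction: %.0f%%" % (100 * (1 - premium / standard)))   # 40%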

Raise the density of the present data centre footprint with higher-capacity drives. In addition, new up-and-coming helium-filled platforms are capable of supporting seven platters per standard 3.5-inch HDD – two more platters, or disks, than present air-filled five-platter drives.

Power Usage Effectiveness, or PUE, denotes the ratio of the total amount of power used by a data centre to the power delivered to and consumed by its equipment. PUE makes it possible to quantify factors such as cooling, power distribution and lighting. The perfect PUE ratio is 1.

Best-in-class data centres like Google and eBay have reported ratios as low as 1.14 and 1.35 respectively, whereas an average data centre has a ratio of 2.5. This implies that a typical data centre uses two and a half times more electricity than the amount required to operate its equipment.
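
A quick sketch of the arithmetic behind those ratios, using an illustrative IT load of 1,000 kW in each case:

    def pue(total_facility_kw, it_equipment_kw):
        # Power Usage Effectiveness: total facility power / IT equipment power.
        return total_facility_kw / it_equipment_kw

    print(pue(1140, 1000))   # 1.14 - a best-in-class ratio
    print(pue(2500, 1000))   # 2.5  - the quoted average data centre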

Written by Betty

December 3rd, 2013 at 12:33 pm

Twitter Takes Steps to Protect Tweets from Snooping

without comments

Twitter has implemented new security measures that will make it much harder for anybody to eavesdrop on communications between its users and its servers, and it is calling on other Internet companies to follow its lead.

The firm has applied 'perfect forward secrecy' on its web and mobile platforms, it said on Friday. The technology should make it impossible for anybody to record traffic today and decrypt it at some point in the future.

Twitter did not provide a reason behind the switch; however, it did link to a blog post from the Electronic Frontier Foundation that suggested the approach be used as a means to stop the National Security Agency (NSA) or another party from spying on Internet communications.

Obviously, much of what is sent over Twitter is destined to become public anyway; however, the service does support a direct message system between two users that is concealed from public view.

In a blog post introducing the new security measures, the company said it believes they "should be the new normal for web service owners."

At present, the encryption between a user and the server is based around a secret key held on the server. The data exchange cannot be read, but it can nevertheless be recorded in its encrypted form. Because of the way the encryption works, it is feasible to decrypt the data at some point in the future should the server's secret key ever be obtained.

With perfect forward secrecy, the data security is based on two short-lived keys that cannot be recovered later, even with knowledge of the server key, so the data remains safe.

This is an important principle because, while encrypted traffic is difficult to break with present computer technology, advances in processing hardware and methods might make it much less difficult to break in the future.
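
As an aside, you can check from any client whether a server negotiates a forward-secret key exchange. A small sketch using Python's standard ssl module (the host is just an example):

    import socket
    import ssl

    HOST = "twitter.com"    # any HTTPS host will do

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            name, protocol, bits = tls.cipher()
            print(name, protocol, bits)
            # ECDHE/DHE suites use ephemeral keys; TLS 1.3 suites are
            # always forward-secret even though the prefix is absent.
            print("forward secrecy:",
                  protocol == "TLSv1.3" or name.startswith(("ECDHE", "DHE")))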

It's important to note that while the technology safeguards against eavesdropping, it will not affect the ability of law enforcement agencies to get information from Twitter through conventional legal channels.

Written by Betty

November 26th, 2013 at 2:22 pm

Posted in Storage News


Rebuilding Broken RAID 5 Servers

without comments

With the increase in volumes of digital data throughout the world, the number of servers in existence has risen spectacularly in order to cope with demand. Only a few years ago, servers and RAIDs were rather uncommon and only found in larger businesses with hundreds of employees. These days RAIDs are often found in home NAS server systems, frequently as RAID 0, RAID 1 or RAID 5. Similarly, RAID and server data recovery services can no longer be seen as activities that happen only on the odd occasion.

A RAID gives you the ability to combine a number of hard disks into one big drive. For example, combining ten 2TB hard drives will give you a storage capacity of close to 20TB. That's enough storage space for a home user to keep lots of HD films, photos and music. A system this size will typically support a small company as well, but of course it depends on the type of data they are storing: Microsoft Office documents tend to be rather small when compared to other types of digital media such as photographs, music and films.
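
How much of that raw space you can actually use depends on the RAID level, as the next few paragraphs explain. A quick sketch of the arithmetic (simplified, ignoring formatting overhead):

    def usable_tb(n_drives, drive_tb, level):
        if level == 0:                       # striping - no redundancy
            return n_drives * drive_tb
        if level == 1:                       # mirroring - half the raw space
            return n_drives * drive_tb / 2
        if level == 5:                       # one drive's worth holds parity
            return (n_drives - 1) * drive_tb
        raise ValueError("unsupported RAID level")

    print(usable_tb(10, 2, 0))   # 20 TB, as in the example above
    print(usable_tb(10, 2, 5))   # 18 TB once parity is accounted for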

There's been a huge increase in the demand for storage space, and files are also very large: a typical HD movie you download is often several GB in size, and home movies you make yourself are also large. It's the same with music files – a respectable iTunes library can reach many gigabytes in size. With music and movies, a lot of people choose to stream the content instead. This saves using up your valuable hard disk space, but it relies on a decent, quick broadband connection; otherwise you will see the spinning wheel so often noticed when streaming content over the BBC iPlayer service, telling you that your internet connection just can't keep up the necessary download speed for you to watch your programme uninterrupted.

Servers are used to combine several hard disks into a large storage volume, or array. Many servers use a configuration known as RAID, and there are many types of RAID available, each with advantages and drawbacks over the others. For example, the fastest type of RAID is RAID 0. Data transfer speeds on RAID 0 are extremely quick, so it's often used by video and TV producers, because their files are large and a fast data throughput is needed when recording, editing and during playback. The downside of RAID 0 is that it has no data redundancy – a posh way of saying that it only takes the failure of one of the hard disks making up the RAID 0 for all your data to become inaccessible. If this happens to you, you will need to contact a RAID data recovery service, who should be able to repair the fault on the hard drive and the RAID and restore your files to good health.

Another common type of RAID is RAID 5. This provides good data transfer speeds and has some built-in data redundancy too. The data held on a RAID 5 is spread across the hard disks that make up the RAID volume, so if one of those hard drives fails, the RAID will continue to work without any loss of data at all. If the RAID 5 does lose one of its hard drives to a fault, it's important that the broken hard drive is swapped out and a working one installed in its place. This is a process known as rebuilding, and many RAIDs are clever enough to allow it to happen without the server being taken offline or even powered down. The faulty hard disk is simply swapped out and a new one put in; the rebuild is automatic and often completed in a matter of minutes, although the time it takes to rebuild a server does depend on the amount of data it holds. The more data stored on it, usually the greater the number of hard drives that constitute the RAID set and the longer it takes to rebuild the server.
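
The redundancy in RAID 5 comes from XOR parity: each stripe stores a parity block computed from the data blocks, so any one missing block can be recomputed from the rest. A toy sketch of the idea (real RAID 5 rotates parity across the drives and works on fixed-size stripes):

    def xor_blocks(blocks):
        # XOR all the blocks together, byte by byte.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    # One stripe spread across three data "disks" plus one parity block.
    disks = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(disks)

    # Disk 1 fails; rebuild its block from the survivors plus parity.
    rebuilt = xor_blocks([disks[0], disks[2], parity])
    assert rebuilt == disks[1]
    print("rebuilt block:", rebuilt)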

Rebuilding is usually the method of choice for getting a RAID 5 with a failed drive back up to optimum working order, but RAID rebuilding can cause a lot of problems. Sometimes the integrity of the data on the RAID server is not what it should be, i.e. the original data contains several corruptions. When a server attempts to rebuild a system using corrupted data, the problem is compounded and becomes significantly worse: a problem that existed in one place will now exist in many others, and this happens for every problem that is found on the server during the rebuild process. A server rebuild that has gone wrong is often a worst-case scenario for data recovery companies, because the rebuild makes restoring the data very difficult and sometimes impossible. Many professional data recovery specialists advise that server rebuilds should not be performed without first checking the integrity of the data in the RAID array and also the integrity of the hard drives themselves.

Written by Betty

November 14th, 2013 at 9:25 am