
Data Reliability Techniques for Specialized Storage Environments

Technical Report UCSC-SSRC-09-02

March 17, 2009

Rosie Wacha

rwacha@cs.ucsc.edu

Storage Systems Research Center

Baskin School of Engineering

University of California, Santa Cruz

Santa Cruz, CA 95064

http://www.ssrc.ucsc.edu/

UNIVERSITY OF CALIFORNIA

SANTA CRUZ

DATA RELIABILITY TECHNIQUES FOR SPECIALIZED STORAGE ENVIRONMENTS

A project submitted in partial satisfaction of the requirements for the degree of

MASTER OF SCIENCE

in

COMPUTER SCIENCE

by

Rosie Wacha

December 2008

The project of Rosie Wacha is approved:

Professor Darrell D. E. Long, Chair

Professor Ethan L. Miller

Acknowledgments

I would like to thank the following people for their help and support: Darrell Long, Ethan Miller, Thomas Schwarz, Scott Brandt, Gary Grider, James Nunez, John Bent, Ralph Becker-Szendy, Neerja Bhatnagar, Kevin Greenan, Bo Hong, Bo Adler, Alisa Neeman, Esteban Molina-Estolano, Valerie Aurora, Julya Wacha, Diane Wacha, and Noah Wacha.

I also want to thank the following organizations for funding my research: UC Regents, Graduate Assistance in Areas of National Need (GAANN), Los Alamos National Laboratory (LANL), and the Institute for Scalable Scientific Data Management (ISSDM).

Contents

Acknowledgments
List of Figures
List of Tables
Abstract
1 Introduction
2 Synthetic Parallel Applications
2.1 Introduction
2.2 Related Work
2.3 How to Create the SPA ...

Abstract

Data reliability has been extensively studied, and techniques such as RAID and erasure coding are commonly used in storage systems. Real workload data is also important for storage systems research. We developed a tool that streamlines the release of workload data by automatically removing all non-I/O activity from software. The tool creates a Synthetic Parallel Application (SPA) that, when run, reproduces the I/O behavior of the original program. We then address reliability in the context of two specific storage environments: sensor networks and tape archives.

Sensor networks are made up of individual nodes that are highly power-constrained. As storage costs fall, nodes increasingly store data locally, and transmission to the base station is reduced both to conserve power and to camouflage the network in hostile environments. We investigated the tradeoff between power and reliability for storage-based sensor networks using Reed-Solomon codes, XOR-based codes, and mirroring. Results show that our Reed-Solomon implementation provides higher reliability and more flexibility, but at a higher energy cost. The XOR2 reliability scheme we designed provides reliability close to that of 4-way mirroring at half the storage overhead.

Commercial tape drives have high reliability ratings. However, an archive comprises many individual drives. To achieve good write performance, data is often written in a striped pattern, so that several tape drives are used to store a single file. Reliability is therefore a significant concern, and additional reliability techniques are often used. We investigated the performance overhead of row-diagonal parity (RDP) in the context of a large tape archive. Results show that our parallel implementation scales well for small numbers of nodes: doubling the stripe size (and the number of nodes) doubles the initial write bandwidth. Future work will compare the performance of RDP with Reed-Solomon coding and evaluate scalability to larger numbers of nodes.

Reliability can be achieved in many ways. The SPA project can help improve storage reliability by allowing software that could normally be tested only in a single environment to be run on different hardware setups. Sensor nodes frequently have very limited power available because of the locations where they are deployed. The reliability of data measured at one node is not always essential, particularly if a nearby node measured the same data. The choice of reliability technique for a sensor network must be made in the context of these constraints.

The data stored in tape archives is often never read, but when it is needed it must be there. We can afford some extra hardware for redundancy as long as performance is not significantly lowered. This project investigates these three areas of reliability.

Chapter 1

Introduction

One of the central requirements for most file systems research is good workload data.

Most of the time this data takes the form of a log of I/O requests, known as a trace. Collecting and releasing traces is not glamorous; file systems researchers typically do it only out of necessity, because collection is time-consuming and privacy concerns must be addressed before the data can be released.





The first part of this project is a tool that simplifies the process of collecting traces of real parallel applications and releasing them to the public. The basic input to the tool is a parallel application that can be run on a cluster. The tool runs the application and collects traces at each node. Then these traces are automatically analyzed to detect all I/O behavior and a new program, called a Synthetic Parallel Application (SPA), is written that will perform the same I/O activities at the same times. All non-I/O behaviors in the trace are ignored and not present in the SPA. Our results show that I/O traces collected from running the SPA closely match the original traces.
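To make this concrete, here is a minimal sketch in Python of the replay loop at a single node. It assumes a hypothetical trace format (one JSON event per line with fields t, op, fd, path, offset, and size); the actual tool emits a standalone parallel program rather than interpreting a trace at runtime.

    import json
    import os
    import time

    def replay_node_trace(trace_path):
        # Load this node's recorded I/O events (hypothetical format: one
        # JSON object per line with fields t, op, fd, path, offset, size).
        with open(trace_path) as f:
            events = [json.loads(line) for line in f]

        start = time.monotonic()
        open_files = {}
        for ev in events:
            # Wait until this event's original offset from the start of the run.
            delay = ev["t"] - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            if ev["op"] == "open":
                open_files[ev["fd"]] = os.open(ev["path"], os.O_RDWR | os.O_CREAT)
            elif ev["op"] == "write":
                # The data written is synthetic; only size, offset, and timing matter.
                os.pwrite(open_files[ev["fd"]], b"\0" * ev["size"], ev["offset"])
            elif ev["op"] == "read":
                os.pread(open_files[ev["fd"]], ev["size"], ev["offset"])
            elif ev["op"] == "close":
                os.close(open_files.pop(ev["fd"]))

Because only the I/O calls and their timings are preserved, the SPA can be shared and rerun on other systems without exposing the original application's computation or data.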

The second part of this project is an investigation of reliability for two storage environments: sensor networks and tape archives. Good data reliability can be achieved by simply mirroring data on several disks. More copies of data provide more reliability. However, the hardware cost quickly grows unmanageable. Particularly in environments where traditional disks are not used or are only part of the storage system, more sophisticated reliability strategies are helpful.
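As a rough illustration of that tradeoff (with made-up numbers, not measurements from this report): if each replica fails independently with probability f over some period, k-way mirroring loses data only when all k replicas fail, with probability f^k, while storage cost grows linearly with k.

    # Illustrative only: the per-replica failure probability f is assumed.
    f = 0.05
    for k in (1, 2, 3, 4):
        # Data is lost only if every one of the k independent replicas fails.
        print(f"{k}-way mirroring: loss probability {f**k:.2e}, storage cost {k}x")

Each additional copy shrinks the loss probability geometrically, but each increment of reliability is bought with another full copy of the data, which is what makes coding-based schemes attractive.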

Sensor nodes that store their data locally are increasingly being deployed in hostile and remote environments such as active volcanoes and battlefields. Observations gathered in these environments are often irreplaceable and must be protected from loss due to node failures. Nodes may fail individually due to power depletion or hardware/software problems, or they may suffer correlated failures from localized destructive events such as fire or rock fall.

While many file systems can guard against these events, they do not consider energy usage in their approach to redundancy. We examine tradeoffs between energy and reliability in three contexts: choice of redundancy technique, choice of redundancy nodes, and frequency of verifying correctness of remotely-stored data. By matching the choice of reliability techniques to the failure characteristics of sensor networks in hostile and inaccessible environments, we can build systems that use less energy while providing higher system reliability.

Tape drives were invented by IBM in the 1950s [11]. Tape archives are still used for data that is written once and then rarely read or updated. Fast write performance can be achieved by writing data in a striped pattern: a very large file is broken up into several chunks and each chunk is written to a separate tape device. For example, a 128 GB file might be broken into 128 chunks, each written to its own tape; the time to write that file would be the time to write 1 GB. Striping is actually done on a much larger scale. The problem is that striping alone significantly degrades the reliability of that large file: if any one of those 128 tapes is damaged, the file cannot be reconstructed. Reliability in the context of such high performance requirements is quite challenging. For example, suppose a 1 GB tape cartridge is expected to last 30 years [2], a mean time between failures a little above 10^5 hours. If the entire archive contains 4000 cartridges, we expect to see a failure every day.
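A quick sanity check of that estimate, assuming cartridge failures are independent so that the archive-wide failure rate grows linearly with the number of cartridges:

    # Sanity check of the failure-rate estimate above.
    per_cartridge_mtbf = 1e5          # hours; the text's rounding of a 30-year lifetime
    cartridges = 4000
    # With independent failures, the aggregate MTBF shrinks linearly.
    archive_mtbf = per_cartridge_mtbf / cartridges
    print(f"archive MTBF: {archive_mtbf:.0f} hours")   # 25 hours, about one failure a day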

In high performance computing the stripe width can be very large, meaning that a single file may be broken into thousands of pieces, each stored on a separate device. A single parity provides some protection, but with thousands of devices it is not sufficient. We implemented a software RAID that performs mirroring, RAID 4, and Row-Diagonal Parity (RDP). We measured the performance of RDP to determine how much processing time is required to compute two parities.
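To show the shape of that computation, here is a minimal sketch of RDP encoding as we understand the published scheme, using integer blocks for brevity (a real implementation XORs fixed-size byte buffers). For a prime p, a stripe has p - 1 rows across p - 1 data disks, one row-parity disk, and one diagonal-parity disk.

    def rdp_encode(data, p):
        # data: (p-1) rows x (p-1) data disks of blocks, combined with bitwise XOR.
        # Returns (row_parity, diag_parity), one block per row on each parity disk.
        rows = p - 1

        # Row parity is plain RAID 4: XOR each row's data blocks.
        row_parity = [0] * rows
        for r in range(rows):
            for d in range(p - 1):
                row_parity[r] ^= data[r][d]

        # Block (r, d) lies on diagonal (r + d) mod p, taken over the data
        # disks and the row-parity disk (index p - 1). Diagonal p - 1 is
        # never stored; skipping it is what makes double-failure recovery work.
        diag_parity = [0] * rows
        for r in range(rows):
            for d in range(p):
                g = (r + d) % p
                if g == p - 1:
                    continue  # the missing diagonal
                diag_parity[g] ^= row_parity[r] if d == p - 1 else data[r][d]
        return row_parity, diag_parity

For example, with p = 5 a stripe spans four data disks plus the two parity disks; after any two disks fail, reconstruction alternates between completing diagonals and rows until every block is recovered.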

In summary, we investigated reliability from several points of view in some very specific contexts. The first is that of the I/O workload and how it can affect the choice of reliability method for a storage system. The SPA provides a method for running the I/O subset of proprietary or private code on untrusted hardware. This allows more applications to be used as benchmarks for new algorithms and can help improve data reliability. Sensor networks typically have very specific constraints: limited power, cheap hardware that is more likely to fail, and deployment in hostile environments, each of which further increases the likelihood of node failure. The choice of reliability technique must address these constraints and provide reasonable reliability in creative ways. For example, storing a mirror copy of one node's data at a node far away in the network can protect the data better than storing that copy at a nearby neighbor. Lastly, tape archives have unusual access patterns and requirements.

Individual hardware components are relatively reliable. In larger systems, components are often used in parallel to improve performance, but this results in much lower overall system reliability. A large file can be written to tape quickly, but reconstructing it then requires every one of those tapes to be readable. The performance impact of adding erasure coding techniques is important and must be kept small enough that performance is not degraded back toward the unstriped case.

Chapter 2

Synthetic Parallel Applications

2.1 Introduction

Workload data is useful for file systems researchers, particularly for simulations of new algorithms and designs. This data is available in a variety of forms, such as traces and benchmarks. A trace can capture the behavior of anything from an entire file system down to a single application. Traces can be quite large, especially if the application is long-running or performs many actions. For this reason, it is difficult to create traces regularly for changing workloads and applications. Also, because of both their large size and the private information they contain, traces are difficult to share with researchers outside a particular organization.

Benchmarks are used to evaluate a system under a specific load. Because designing a good new benchmark is costly, the same benchmark is often reused to evaluate many different systems. Benchmarks are often synthetic programs that perform no useful computation but stress the system in a specific way to measure certain characteristics, such as peak I/O bandwidth. While this type of benchmark is useful for comparing systems against specific requirements, it does not capture system metrics under "typical" conditions of user applications.

Ideally, real user applications could be released as benchmarks and used to compare systems; that is conceptually the goal of the first part of this project.

We created a tool that generates an I/O skeleton program, called a Synthetic Parallel Application, from a real scientific program. This work was completed at Los Alamos National Laboratory.









 