Top White Papers

Driving competitive advantage by predicting the future

An organization derives a competitive advantage when it develops strategies, techniques, or resources that allow it to outperform...

Four strategies to reduce your open source risk

Try to think of a single system in the world that hasn’t been touched by open source software...

White Papers

Time series analysis with Auto ARIMA

Time series analysis is used across many industries to extract meaningful statistics, characteristics, and insights. Businesses use it to improve performance or mitigate risk in applications such as finance, weather prediction, cell tower capacity planning, pattern recognition, signal processing, and engineering.
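
As a minimal sketch of the approach, assuming Python with the pmdarima library (an illustrative choice; the paper itself may use a different toolchain), an automated ARIMA fit and forecast looks like this:

    # Auto ARIMA sketch: select model orders automatically, then forecast.
    import numpy as np
    import pmdarima as pm

    # Synthetic monthly series (trend plus seasonality), for illustration only.
    rng = np.random.default_rng(0)
    t = np.arange(120)
    y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, size=t.size)

    # auto_arima searches candidate (p, d, q)(P, D, Q) orders and keeps the
    # model with the best information criterion (AIC by default).
    model = pm.auto_arima(y, seasonal=True, m=12, suppress_warnings=True)
    print(model.summary())

    # Forecast the next 12 periods from the fitted model.
    print(model.predict(n_periods=12))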

TotalView CUDA

CUDA introduces developers to a number of new concepts (such as kernels, streams, warps, and explicitly multilevel memory) that are not encountered in serial or other parallel programming paradigms. Visibility into these elements is critical for troubleshooting and tuning applications that make use of CUDA. This paper highlights CUDA concepts implemented in CUDA 3.0–4.0, the impact of those concepts on troubleshooting, and how TotalView helps users deal with these new CUDA-specific constructs. CUDA is frequently used alongside MPI parallelism and host-side multicore and multithread parallelism; the TotalView parallel debugger provides developers with an integrated view of all three levels of parallelism within a single debugging session.
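
For readers new to these constructs, the following is a minimal sketch of a kernel launch, written with Numba's CUDA bindings for Python as an illustrative stand-in for the CUDA C code the paper discusses:

    # Minimal CUDA kernel sketch (requires a CUDA-capable GPU and Numba).
    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        # Each GPU thread handles one element; cuda.grid(1) is its global index.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.ones(n, dtype=np.float32)
    b = np.full(n, 2.0, dtype=np.float32)
    out = np.zeros(n, dtype=np.float32)

    # The launch configuration (blocks x threads per block) is exactly the kind
    # of CUDA-specific construct a debugger must make visible: many thousands
    # of threads execute this kernel concurrently.
    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks, threads_per_block](a, b, out)
    print(out[:5])  # [3. 3. 3. 3. 3.]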

Transitioning to multicore: Part II

Multicore systems are ubiquitous; it’s virtually impossible to buy even a commodity computer without a dual-, quad-, or hex-core processor, and it won’t be long before many-core processors are prevalent as well. Each core in a multicore processor can execute a program, so a quad-core processor can run four separate programs at the same time. That’s great if you have many different programs to run at once, but it becomes a problem when you need performance from a single program. Those four cores can also potentially run one program faster than a single-core processor would, but only if the program is written correctly. If you run a sequential (or serial) program written for single-core architectures on a multicore platform, it will generally be able to leverage only a single core. Serial programs don’t run any faster, and may even run slightly slower, on multicore processors.
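
To make the distinction concrete, here is a minimal sketch in Python (an illustrative assumption, not code from the paper) of the same CPU-bound work done serially and then spread across cores:

    # Serial vs. multicore execution of the same CPU-bound task.
    import time
    from multiprocessing import Pool

    def heavy(n):
        # Deliberately CPU-bound work.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        work = [2_000_000] * 8

        start = time.perf_counter()
        serial = [heavy(n) for n in work]  # runs on a single core
        print(f"serial:   {time.perf_counter() - start:.2f}s")

        start = time.perf_counter()
        with Pool() as pool:               # one worker process per core
            parallel = pool.map(heavy, work)
        print(f"parallel: {time.perf_counter() - start:.2f}s")

        assert serial == parallel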

Many integrated core debugging

Intel® Xeon® Phi™ coprocessors present an exciting opportunity for HPC developers to take advantage of many-core processor technology. Because the Intel Xeon Phi coprocessor shares many architectural features and much of the development tool chain with multicore Intel Xeon processors, migrating a program to it is generally fairly easy. To fully leverage the coprocessor, however, developers need to express a new level of parallelism, which may require significantly rethinking their algorithms. Scientists need tools that support debugging and optimizing hybrid MPI/OpenMP parallel applications that may have dozens or even hundreds of threads per node.
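
As a minimal sketch of that hybrid pattern, assuming Python with mpi4py standing in for MPI and a thread pool standing in for OpenMP (the paper targets native MPI/OpenMP codes):

    # Hybrid parallelism sketch: MPI ranks across nodes, threads within a rank.
    # Run with, e.g.: mpiexec -n 4 python hybrid.py  (requires mpi4py)
    from concurrent.futures import ThreadPoolExecutor
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    def chunk_work(i):
        # Placeholder per-thread computation; a real code does science here.
        return rank * 1000 + i

    # Thread-level parallelism inside each MPI rank (the OpenMP role).
    with ThreadPoolExecutor(max_workers=4) as pool:
        local = sum(pool.map(chunk_work, range(100)))

    # Process-level parallelism across ranks: combine the partial results.
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{comm.Get_size()} ranks, total = {total}")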

Current visual data analysis techniques

The term visualization was coined in the July 1987 Visualization in Scientific Computing (ViSC) report, in which the National Science Foundation defines it as “a method of computing that offers a way for seeing the unseen. It enriches the process of discovery and fosters profound and unexpected insights.” However, current visualization techniques, such as realistic rendering, are geared primarily toward geometric sources of data such as CAD and two- and three-dimensional modeling.

The value of visual data analysis

Decisions must be made based on data that are voluminous, complex, and arriving at a rapid rate. The raw numbers, in records or tables, are overwhelming. That’s why many engineers and researchers are turning to visual data analysis (VDA) software, such as PV-WAVE, to better analyze their data. These tools combine state-of-the-art graphics, data access, data management, and analytical techniques in a highly interactive environment that can interpret large amounts of data quickly.

VDA tool technology unleashed

For many PV-WAVE programmers, exposure to VDA tools consists of running the navigator and using it as an interactive, point-and-click way of accessing PV-WAVE functionality. That is certainly the intent of the navigator, but it is only one way of making use of the underlying technology. Some users may also invoke the individual VDA tools (Wz routines) interactively, for example with the command "WzPlot, DIST(10)". This is another way of using VDA tools in an ad hoc, interactive fashion.

VDA tool technology

The VDA tool architecture is a framework for developing cross-platform applications with graphical user interfaces (GUIs) in PV-WAVE. This paper puts the VDA tool architecture into context and positions the PV-WAVE environment as a development tool.
