From 96a2a45003a3a49f4250be5aa79362f07d8e3925 Mon Sep 17 00:00:00 2001 From: "FAZELI SHAHROUDI Sepehr (INTERN)" <Sepehr.FAZELISHAHROUDI.intern@3ds.com> Date: Mon, 17 Mar 2025 09:03:00 +0100 Subject: [PATCH] Add: implementation Chapter --- chapters/3-Implementation.tex | 112 +----- .../Chapter-3-sections => outdated}/Tasks.tex | 0 .../performance-metrics.tex | 0 .../Chapter-3-sections/Image-Conversion.tex | 65 ++++ .../Libraries-Implementation.tex | 359 ++++++++++++++++++ .../Chapter-3-sections/Memory-Profiling.tex | 68 ++++ .../Chapter-3-sections/Pixel-Iteration.tex | 82 ++++ sections/Chapter-3-sections/Result-Export.tex | 42 ++ .../System-Architecture.tex | 58 +++ 9 files changed, 682 insertions(+), 104 deletions(-) rename {sections/Chapter-3-sections => outdated}/Tasks.tex (100%) rename {sections/Chapter-3-sections => outdated}/performance-metrics.tex (100%) create mode 100644 sections/Chapter-3-sections/Image-Conversion.tex create mode 100644 sections/Chapter-3-sections/Libraries-Implementation.tex create mode 100644 sections/Chapter-3-sections/Memory-Profiling.tex create mode 100644 sections/Chapter-3-sections/Pixel-Iteration.tex create mode 100644 sections/Chapter-3-sections/Result-Export.tex create mode 100644 sections/Chapter-3-sections/System-Architecture.tex diff --git a/chapters/3-Implementation.tex b/chapters/3-Implementation.tex index 7df8ace..3d5cf6e 100644 --- a/chapters/3-Implementation.tex +++ b/chapters/3-Implementation.tex @@ -1,113 +1,17 @@ \chapter{Implementation} -In this chapter, we describe the practical realization of our benchmark for comparing image processing libraries. The benchmark focuses on two primary performance metrics—image loading time and pixel iteration time—which are critical in industrial applications where processing speed and resource efficiency are paramount. This chapter is organized into several sections. First, we provide an overview of the benchmark implementation and the rationale behind the chosen metrics. 
Next, we detail the code structure and key components, followed by a discussion on the measurement methodology and the processing and analysis of the results. Finally, we explain the selection criteria for the libraries under comparison.
+This chapter details the implementation of a comprehensive benchmarking framework to evaluate several image processing libraries, including ImageSharp, OpenCvSharp paired with SkiaSharp, Emgu CV coupled with Structure.Sketching, and Magick.NET integrated with MagicScaler. The objective was to create an end-to-end system that not only measures execution times for common image operations but also provides insights into memory usage.
+The framework was designed to answer key questions regarding the efficiency of image conversion and pixel iteration operations, two fundamental tasks in image processing. The following sections describe the review process, architectural decisions, and technical implementations developed in the study.
+\input{sections/Chapter-3-sections/System-Architecture.tex}
-\section{Benchmark Implementation Overview}
+\input{sections/Chapter-3-sections/Image-Conversion.tex}
-The primary objective of our benchmark is to quantify and compare the performance of various image processing libraries under controlled conditions. Two metrics were defined for this purpose:
+\input{sections/Chapter-3-sections/Pixel-Iteration.tex}
-\begin{itemize} - \item \textbf{Image Loading Time:} This metric measures the time required to read an image file from disk into memory. In many industrial applications, rapid image loading is essential—for example, when images are acquired continuously from production lines for real-time quality control. - \item \textbf{Pixel Iteration Time:} This metric captures the time taken to iterate over every pixel in a loaded image and perform a simple operation, such as converting the image to grayscale.
Pixel iteration is a proxy for evaluating the efficiency of low-level image processing operations, which are common in tasks like filtering, segmentation, and feature extraction. -\end{itemize} +\input{sections/Chapter-3-sections/Libraries-Implementation.tex} -These metrics were chosen because they represent two fundamental aspects of image processing performance: input/output efficiency and computational processing speed. While other metrics (such as encoding/decoding time or compression efficiency) could be considered, our focus on loading and pixel iteration provides a clear, measurable foundation that is directly relevant to many real-world industrial scenarios. +\input{sections/Chapter-3-sections/Memory-Profiling.tex} - - -\section{Code Structure and Key Components} - -The benchmark was implemented in C\# using the .NET Core framework. The implementation is modular, with separate components handling different aspects of the benchmark. Key modules include: - -\subsection{Test Harness} -A central test harness orchestrates the execution of the benchmark tests. It is responsible for: -\begin{itemize} - \item Initializing the testing environment. - \item Iterating through multiple runs to ensure statistical significance. - \item Invoking library-specific methods for image loading and pixel iteration. - \item Logging and aggregating timing results. -\end{itemize} - -\subsection{Image Loading Module} -This module is responsible for measuring the image loading time. For each library under test, the module: -\begin{itemize} - \item Reads a predetermined image file from disk using the library’s native image-loading functions. - \item Utilizes the \texttt{Stopwatch} class to capture the precise duration from the start of the load operation until the image is fully loaded into memory. - \item Repeats the process several times to calculate an average loading time and to identify any anomalies. 
-\end{itemize} - -\subsection{Pixel Iteration Module} -The pixel iteration module measures the time required to traverse the entire image and perform a basic pixel transformation (e.g., converting each pixel’s color value to grayscale). The module: -\begin{itemize} - \item Accesses the pixel data of the loaded image through library-specific APIs. - \item Iterates over the pixel array using a standard loop construct. - \item Applies a simple transformation to each pixel. - \item Measures the cumulative time taken for the entire iteration using the \texttt{Stopwatch} class. - \item Repeats the operation multiple times to ensure consistency and reliability of the results. -\end{itemize} - -\subsection{Data Aggregation and Logging} -Each test run logs the measured times to a results file and/or console output. Post-processing scripts then aggregate the data to calculate key statistical metrics such as mean, standard deviation, and confidence intervals. This structured logging ensures that the data analysis is both reproducible and transparent. - -Large fragments of the source code for these modules are included in the appendix. For instance, the file \textit{Emgu CV\_Structure.Sketching\_testing.cs} contains the implementation details for running tests using Emgu CV and Structure.Sketching, while \textit{Imagesharp\_Testing.cs} includes the analogous tests for ImageSharp. - - - -\section{Measurement Methodology} - -\subsection{Timing and Performance Tools} -To obtain precise measurements, our implementation relies on the .NET \texttt{Stopwatch} class, which offers high-resolution timing functionality. Each critical operation—whether loading an image or iterating through pixels—is wrapped within start and stop calls to capture the elapsed time accurately. The high resolution of the \texttt{Stopwatch} ensures that even operations on the order of milliseconds can be measured reliably. 
- -\subsection{Repetition and Statistical Rigor} -Recognizing that individual timing measurements can be affected by transient system load and other noise factors, each test is repeated a predetermined number of times (typically 100 iterations). The aggregated results are then statistically analyzed to produce average times and to compute variance. This repetition guarantees that our reported metrics are representative of typical performance and not skewed by outliers. - -\subsection{Data Processing and Analysis} -Once the timing data is collected, it is processed using both built-in .NET data structures and external tools such as Excel or Python scripts for more in-depth statistical analysis. The processing involves: -\begin{itemize} - \item Calculating the arithmetic mean of the recorded times. - \item Determining the standard deviation to understand variability. - \item Generating graphs and charts that visually compare the performance across different libraries. - \item Performing comparative analysis to highlight strengths and weaknesses, and to identify any statistically significant differences. -\end{itemize} - -This detailed analysis helps in understanding not only which library performs best on average but also how consistent each library’s performance is over multiple runs. - - - -\section{Library Selection Criteria} - -The libraries selected for comparison in this benchmark were chosen based on several key criteria: - -\subsection{Market Adoption and Community Support} -Libraries with a broad user base and active community support were prioritized. This ensures that the libraries are well-maintained and have extensive documentation and real-world usage examples. - -\subsection{Feature Set and Functionality} -The libraries were evaluated based on the breadth and depth of their image processing capabilities. Key features include: -\begin{itemize} - \item Support for multiple image formats. 
- \item Efficient handling of image loading and pixel-level operations. - \item Availability of hardware acceleration features. -\end{itemize} -Libraries that provide a rich set of features and that have proven utility in industrial applications were deemed more favorable. - -\subsection{Performance and Resource Efficiency} -Preliminary benchmarks and documented performance metrics played an important role in the selection process. Libraries that demonstrated superior performance in prior studies—especially in terms of speed and low memory usage—were included in the evaluation. - -\subsection{Compatibility and Integration} -Ease of integration with existing systems is a critical consideration for industrial applications. The selected libraries must be compatible with modern development frameworks (such as .NET Core) and support a modular architecture that facilitates easy benchmarking and subsequent deployment. - -Based on these criteria, the libraries chosen for this benchmark include ImageSharp, Emgu CV, and Structure.Sketching, among others. Each library was subjected to the same set of tests under identical conditions to ensure a fair and objective comparison. - - - -% \section{Summary of Implementation} - -% The benchmark implementation described in this chapter is designed to provide a rigorous, reproducible, and fair comparison of image processing libraries based on two critical metrics: image loading time and pixel iteration time. By implementing a modular test harness in C\#, leveraging high-resolution timing mechanisms, and adopting a statistically rigorous approach to data collection, the benchmark delivers reliable performance data that can guide the selection of the most suitable library for industrial image processing applications. - -% Larger fragments of the source code that implement these tests can be found in the Appendix, ensuring that all details are available for verification and further analysis. 
- - - -This chapter has detailed the technical implementation of the benchmark, including the rationale behind the selected metrics, the code structure, the measurement methodology, and the criteria used for library selection. The comprehensive approach ensures that the performance comparisons presented later in this thesis are both robust and reliable, thereby providing a solid foundation for the evaluation of image processing libraries in industrial contexts. \ No newline at end of file +\input{sections/Chapter-3-sections/Result-Export.tex} \ No newline at end of file diff --git a/sections/Chapter-3-sections/Tasks.tex b/outdated/Tasks.tex similarity index 100% rename from sections/Chapter-3-sections/Tasks.tex rename to outdated/Tasks.tex diff --git a/sections/Chapter-3-sections/performance-metrics.tex b/outdated/performance-metrics.tex similarity index 100% rename from sections/Chapter-3-sections/performance-metrics.tex rename to outdated/performance-metrics.tex diff --git a/sections/Chapter-3-sections/Image-Conversion.tex b/sections/Chapter-3-sections/Image-Conversion.tex new file mode 100644 index 0000000..f46035d --- /dev/null +++ b/sections/Chapter-3-sections/Image-Conversion.tex @@ -0,0 +1,65 @@ +\section{Benchmarking Implementation} + +The implementation of the benchmarking framework is divided into two main tests: the image conversion benchmark and the pixel iteration benchmark. Both tests follow a similar structure, starting with a warm-up phase to mitigate initialization effects, followed by a series of iterations where performance metrics are recorded. + +\subsection{Image Conversion Benchmark Implementation} + +The image conversion benchmark is designed to measure the time it takes to load an image from disk, convert its format, and save the result. This process is critical in many image processing pipelines, where quick and efficient conversion between formats can significantly impact overall throughput. 
+ +The code snippet below illustrates the core routine for this benchmark. The process begins with a series of warm-up iterations, during which the system’s just-in-time (JIT) compilation and caching mechanisms are activated. After the warm-up phase, the main iterations are executed, with each iteration logging the time taken for the conversion. + +\begin{lstlisting}[language={[Sharp]C}, caption={Image conversion benchmark implementation (ImageSharp-Testing.cs)}] +public class ImageConversionBenchmark +{ + public static (double warmupTime, double averageTime, double totalTime) RunBenchmark(string inputPath, string outputPath, int iterations) + { + long totalElapsedMilliseconds = 0; + long warmupTime = 0; + int warmupIterations = 5; + Stopwatch stopwatch = new Stopwatch(); + + // Warm-up iterations to allow the system to reach steady state. + for (int i = 0; i < warmupIterations; i++) + { + stopwatch.Reset(); + stopwatch.Start(); + using (Image image = Image.Load(inputPath)) + { + using (FileStream fs = new FileStream(outputPath, FileMode.Create)) + { + image.Save(fs, new PngEncoder()); + } + } + stopwatch.Stop(); + warmupTime += stopwatch.ElapsedMilliseconds; + } + + // Main iterations where actual performance data is collected. 
+    for (int i = 0; i < iterations; i++)
+    {
+        stopwatch.Reset();
+        stopwatch.Start();
+        using (Image image = Image.Load(inputPath))
+        {
+            using (FileStream fs = new FileStream(outputPath, FileMode.Create))
+            {
+                image.Save(fs, new PngEncoder());
+            }
+        }
+        stopwatch.Stop();
+        totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds;
+        Console.WriteLine($"Iteration {i + 1}: Image conversion took {stopwatch.ElapsedMilliseconds} ms");
+    }
+
+    double averageTime = totalElapsedMilliseconds / (double)iterations;
+    double totalTime = warmupTime + totalElapsedMilliseconds;
+    Console.WriteLine($"Warm-up: {warmupTime} ms, Average: {averageTime} ms, Total: {totalTime} ms");
+
+    return (warmupTime, averageTime, totalTime);
+    }
+}
+\end{lstlisting}
+
+In the code, the warm-up phase runs for five iterations. Each iteration loads the image, saves it as a PNG, and then accumulates the elapsed time. After the warm-up, the main test performs 100 iterations of the same operation, allowing us to compute an average execution time. The rationale behind this design is to isolate the steady-state performance from any one-time overhead, ensuring that the reported metrics reflect the true operational cost of image conversion.
+
+The story behind this implementation is one of iterative refinement. Early tests revealed that the initial iterations were significantly slower, prompting the introduction of the warm-up phase. Over time, the benchmarking routine was refined to ensure that every iteration is as isolated as possible, thereby reducing the influence of transient system states.
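+The per-iteration timings printed by the benchmark can also be reduced to summary statistics. The helper below is an illustrative sketch, not part of the benchmark source; it assumes the individual iteration times are collected into a list (the routine above keeps only a running sum) and that \texttt{System.Linq} is available:
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Illustrative aggregation of per-iteration timings (sketch, not part of the benchmark source)}]
+// Hypothetical helper: mean and sample standard deviation of the
+// recorded iteration times. Assumes at least two samples.
+public static (double Mean, double StdDev) Summarize(IReadOnlyList<long> timesMs)
+{
+    double mean = timesMs.Average();
+    double variance = timesMs.Sum(t => (t - mean) * (t - mean)) / (timesMs.Count - 1);
+    return (mean, Math.Sqrt(variance));
+}
+\end{lstlisting}
+
+A standard deviation computed this way makes it possible to report the variability of each library's timings alongside the averages.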
\ No newline at end of file diff --git a/sections/Chapter-3-sections/Libraries-Implementation.tex b/sections/Chapter-3-sections/Libraries-Implementation.tex new file mode 100644 index 0000000..2b37786 --- /dev/null +++ b/sections/Chapter-3-sections/Libraries-Implementation.tex @@ -0,0 +1,359 @@
+\section{Libraries Implementation}
+As discussed in the Methodology chapter, a comprehensive evaluation was undertaken to assess the strengths and limitations of various image processing libraries. This analysis informed the decision to implement integrations for three framework pairings: OpenCvSharp with SkiaSharp, Emgu CV with Structure.Sketching, and Magick.NET with MagicScaler. The following excerpts present representative code segments that illustrate the implementation strategies developed for these libraries. These segments capture both the rationale behind each implementation approach and the practical constraints and performance considerations addressed throughout the thesis, reflecting the systematic, experimental, and iterative nature of the research.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\subsection{OpenCvSharp and SkiaSharp Implementation}
+
+The following implementation shows how the OpenCvSharp and SkiaSharp libraries are integrated to perform image conversion and pixel iteration tasks. Image conversion was implemented using OpenCvSharp, while pixel iteration was implemented using SkiaSharp.
+
+\begin{lstlisting}[language={[Sharp]C}, caption={SkiaSharp Implementation (RunBenchmark Method)}, label={lst:skia_sharp_RunBenchmarkMethod}]
+using OpenCvSharp;
+using SkiaSharp;
+
+// Image Conversion logic using SkiaSharp
+public class ImageConversionBenchmark
+{
+    public static (double warmupTime, double averageTimeExcludingWarmup, double totalTimeIncludingWarmup) RunBenchmark(string inputPath, string outputPath, int iterations)
+\end{lstlisting}
+
+The ImageConversionBenchmark class contains a static method RunBenchmark that takes the input image path, output image path, and number of iterations as input parameters. The method returns a tuple containing the warm-up time, average time excluding warm-up, and total time including warm-up, which are later written to the results Excel file.
+
+\begin{lstlisting}[language={[Sharp]C}, caption={SkiaSharp Implementation (Initialization)}, label={lst:skia_sharp_initialization}]
+    {
+        long totalElapsedMilliseconds = 0;
+        long warmupTime = 0;
+        int warmupIterations = 5;
+        Stopwatch stopwatch = new Stopwatch();
+\end{lstlisting}
+
+First, the \texttt{totalElapsedMilliseconds} and \texttt{warmupTime} variables are initialized, and, as discussed in the Methodology chapter, \texttt{warmupIterations} is set to 5. A stopwatch object is created to measure the elapsed time for each iteration.
+
+\begin{lstlisting}[language={[Sharp]C}, caption={SkiaSharp Implementation (Warm-up Iterations)}]
+    // Warm-up iterations
+    for (int i = 0; i < warmupIterations; i++)
+    {
+        stopwatch.Reset();
+        stopwatch.Start();
+
+        using (var image = Cv2.ImRead(inputPath, ImreadModes.Color))
+        {
+            Cv2.ImWrite(outputPath, image);
+        }
+
+        stopwatch.Stop();
+        warmupTime += stopwatch.ElapsedMilliseconds;
+    }
+\end{lstlisting}
+
+The warm-up phase is executed five times to ensure that the libraries are fully initialized before the main iterations begin. In each iteration, the code reads an image using \texttt{Cv2.ImRead} and writes it using \texttt{Cv2.ImWrite}.
The elapsed time for each iteration is recorded using the stopwatch object. + +\begin{lstlisting}[language={[Sharp]C}, caption={SkiaSharp Implementation (Main Iterations)}] + // Main iterations + for (int i = 0; i < iterations; i++) + { + stopwatch.Reset(); + stopwatch.Start(); + + using (var image = Cv2.ImRead(inputPath, ImreadModes.Color)) + { + Cv2.ImWrite(outputPath, image); + } + + stopwatch.Stop(); + totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds; + Console.WriteLine($"Iteration {i + 1}: Image conversion took {stopwatch.ElapsedMilliseconds} ms"); + } +\end{lstlisting} + +After the warm-up phase, the main iterations are executed, and the elapsed time for each iteration is recorded. The results are then aggregated and returned as a tuple containing the warm-up time, average time excluding warm-up, and total time including warm-up. + +\begin{lstlisting}[language={[Sharp]C}, caption={SkiaSharp Implementation (Results Calculation)}] + double averageTimeExcludingWarmup = totalElapsedMilliseconds / (double)iterations; + double totalTimeIncludingWarmup = warmupTime + totalElapsedMilliseconds; + + Console.WriteLine($"Warm-up time for image conversion: {warmupTime} ms"); + Console.WriteLine($"Average time excluding warm-up for image conversion: {averageTimeExcludingWarmup} ms"); + Console.WriteLine($"Total time including warm-up for image conversion: {totalTimeIncludingWarmup} ms"); + + return (warmupTime, averageTimeExcludingWarmup, totalTimeIncludingWarmup); + } +} +\end{lstlisting} + +Finally, the average time excluding warm-up, total time including warm-up, and warm-up time are calculated. These values are then printed to the console and returned as a tuple containing the warm-up time, average time excluding warm-up, and total time including warm-up. + +The pixel iteration benchmark, on the other hand, uses SkiaSharp to perform pixel-wise operations on the image. 
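+Both benchmark classes expose the same static \texttt{RunBenchmark} entry point, so a complete run can be scripted uniformly across libraries. The snippet below is a minimal invocation sketch; the file paths and iteration count are placeholders rather than values taken from the framework:
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Illustrative invocation of the conversion benchmark (placeholder paths)}]
+// Placeholder paths; any readable image file will do.
+var (warmup, average, total) =
+    ImageConversionBenchmark.RunBenchmark("input.png", "output.png", 100);
+Console.WriteLine($"Warm-up {warmup} ms, average {average} ms, total {total} ms");
+\end{lstlisting}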
+
+As with the image conversion benchmark, the pixel iteration benchmark is implemented as a static method \texttt{RunBenchmark}, here taking the image path and the number of iterations as input parameters. The method returns a tuple containing the warm-up time, the average time excluding warm-up, and the total time including warm-up, and the variables are initialized in the same way.
+
+\begin{lstlisting}[language={[Sharp]C}, caption={OpenCvSharp Implementation (Warm-up Iterations)}]
+    // Warm-up iterations
+    for (int i = 0; i < warmupIterations; i++)
+    {
+        stopwatch.Reset();
+        stopwatch.Start();
+
+        using (var image = Cv2.ImRead(imagePath, ImreadModes.Color))
+        {
+            for (int y = 0; y < image.Rows; y++)
+            {
+                for (int x = 0; x < image.Cols; x++)
+                {
+                    var pixel = image.At<Vec3b>(y, x);
+                    byte gray = (byte)((pixel.Item0 + pixel.Item1 + pixel.Item2) / 3);
+                    image.Set(y, x, new Vec3b(gray, gray, gray));
+                }
+            }
+        }
+
+        stopwatch.Stop();
+        warmupTime += stopwatch.ElapsedMilliseconds;
+    }
+\end{lstlisting}
+
+The warm-up phase is executed five times to ensure that the libraries are fully initialized before the main iterations begin. In each iteration, the code reads an image using \texttt{Cv2.ImRead}, iterates over each pixel, calculates the grayscale value, and then sets the pixel value using \texttt{image.At<Vec3b>} and \texttt{image.Set}. The elapsed time for each iteration is recorded using the stopwatch object.
+ +\begin{lstlisting}[language={[Sharp]C}, caption={OpenCvSharp Implementation (Main Iterations)}] + // Main iterations + for (int i = 0; i < iterations; i++) + { + stopwatch.Reset(); + stopwatch.Start(); + + using (var image = Cv2.ImRead(imagePath, ImreadModes.Color)) + { + for (int y = 0; y < image.Rows; y++) + { + for (int x = 0; x < image.Cols; x++) + { + var pixel = image.At<Vec3b>(y, x); + byte gray = (byte)((pixel.Item0 + pixel.Item1 + pixel.Item2) / 3); + image.Set(y, x, new Vec3b(gray, gray, gray)); + } + } + } + + stopwatch.Stop(); + totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds; + Console.WriteLine($"Iteration {i + 1}: Pixel iteration took {stopwatch.ElapsedMilliseconds} ms"); + } +\end{lstlisting} + +After the warm-up phase, the main iterations are executed, using the same logic as the warm-up phase. The elapsed time for each iteration is recorded, and the results are then aggregated and returned as a tuple containing the warm-up time, average time excluding warm-up, and total time including warm-up. + +\begin{lstlisting}[language={[Sharp]C}, caption={OpenCvSharp Implementation (Results Calculation)}] + double averageTimeExcludingWarmup = totalElapsedMilliseconds / (double)iterations; + double totalTimeIncludingWarmup = warmupTime + totalElapsedMilliseconds; + + Console.WriteLine($"Warm-up time for pixel iteration: {warmupTime} ms"); + Console.WriteLine($"Average time excluding warm-up for pixel iteration: {averageTimeExcludingWarmup} ms"); + Console.WriteLine($"Total time including warm-up for pixel iteration: {totalTimeIncludingWarmup} ms"); + + return (warmupTime, averageTimeExcludingWarmup, totalTimeIncludingWarmup); + } +} +\end{lstlisting}   + +Finally, the average time excluding warm-up, total time including warm-up, and warm-up time are calculated. These values are then printed to the console and returned as a tuple containing the warm-up time, average time excluding warm-up, and total time including warm-up. 
The returned values are then used to generate the results in an Excel file.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\subsection{Magick.NET Implementation}
+
+In the implementation of both the image conversion and pixel iteration benchmarks, the Magick.NET library was used. This decision was based on Magick.NET's comprehensive functionality, which includes support for high-quality image conversion and efficient pixel-wise operations. \\
+
+Similar to the previous section on OpenCvSharp and SkiaSharp, the ImageConversionBenchmark class for Magick.NET features a static RunBenchmark method. In this method, the necessary variables are initialized to measure and record the performance of image conversion operations. This consistent approach across libraries facilitates a clear comparison of their performance under similar conditions.\\
+
+The logic of the warm-up phase and the main iterations is unchanged; only the library-specific functions differ. Image conversion with Magick.NET reads an image using \texttt{new MagickImage(inputPath)} and writes it using \texttt{image.Write(outputPath, MagickFormat.Png)}; with these calls in place, the image conversion benchmark was implemented.
+ +\begin{lstlisting}[language={[Sharp]C}, caption={Magick.NET Implementation (Image Conversion)}, label={lst:magicknet_imageconversion}] +// Warm-up iterations +for (int i = 0; i < warmupIterations; i++) +{ + stopwatch.Reset(); + stopwatch.Start(); + + using (var image = new MagickImage(inputPath)) + { + image.Write(outputPath, MagickFormat.Png); + } + + stopwatch.Stop(); + warmupTime += stopwatch.ElapsedMilliseconds; +} + +// Main iterations +for (int i = 0; i < iterations; i++) +{ + stopwatch.Reset(); + stopwatch.Start(); + + using (var image = new MagickImage(inputPath)) + { + image.Write(outputPath, MagickFormat.Png); + } + + stopwatch.Stop(); + totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds; + Console.WriteLine($"Iteration {i + 1}: Image conversion took {stopwatch.ElapsedMilliseconds} ms"); +} +\end{lstlisting} + +The pixel iteration benchmark was implemented by first retrieving the pixel data using the \texttt{image.GetPixels()} method. Then, for each pixel, the color channels were set to the same gray value using the \texttt{pixels.SetPixel(x, y, new ushort[] \{ gray, gray, gray \})} function. This process was repeated for each pixel in the image for both the warm-up phase and the main iterations. 
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Magick.NET Implementation (Pixel Iteration)}, label={lst:magicknet_pixel_iteration}]
+// Warm-up iterations
+for (int i = 0; i < warmupIterations; i++)
+{
+    stopwatch.Reset();
+    stopwatch.Start();
+
+    using (var image = new MagickImage(imagePath))
+    {
+        var pixels = image.GetPixels();
+        for (int y = 0; y < image.Height; y++)
+        {
+            for (int x = 0; x < image.Width; x++)
+            {
+                var pixel = pixels.GetPixel(x, y); // Get pixel data
+                ushort gray = (ushort)((pixel[0] + pixel[1] + pixel[2]) / 3); // Convert to grayscale
+                pixels.SetPixel(x, y, new ushort[] { gray, gray, gray }); // Set pixel data with ushort[]
+            }
+        }
+    }
+
+    stopwatch.Stop();
+    warmupTime += stopwatch.ElapsedMilliseconds;
+}
+
+// Main iterations
+for (int i = 0; i < iterations; i++)
+{
+    stopwatch.Reset();
+    stopwatch.Start();
+
+    using (var image = new MagickImage(imagePath))
+    {
+        var pixels = image.GetPixels();
+        for (int y = 0; y < image.Height; y++)
+        {
+            for (int x = 0; x < image.Width; x++)
+            {
+                var pixel = pixels.GetPixel(x, y); // Get pixel data
+                ushort gray = (ushort)((pixel[0] + pixel[1] + pixel[2]) / 3); // Convert to grayscale
+                pixels.SetPixel(x, y, new ushort[] { gray, gray, gray }); // Set pixel data with ushort[]
+            }
+        }
+    }
+
+    stopwatch.Stop();
+    totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds;
+    Console.WriteLine($"Iteration {i + 1}: Pixel iteration took {stopwatch.ElapsedMilliseconds} ms");
+}
+\end{lstlisting}
+
+The results of the image conversion and pixel iteration benchmarks were then aggregated, as for the previous libraries, and returned as a tuple containing the warm-up time, the average time excluding warm-up, and the total time including warm-up. These values were then used to generate the results in an Excel file.
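+As a side note on the transformation itself, the equal-weight channel average used throughout these benchmarks is only one possible grayscale mapping; a perceptually weighted average based on the ITU-R BT.601 luma coefficients is a common alternative. The sketch below is illustrative and not part of the benchmark source:
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Illustrative luma-weighted grayscale conversion (not part of the benchmark source)}]
+// BT.601 luma weighting; the benchmarks instead weight all
+// three channels equally, which is cheaper but less perceptual.
+public static byte ToLuma(byte r, byte g, byte b)
+{
+    return (byte)(0.299 * r + 0.587 * g + 0.114 * b);
+}
+\end{lstlisting}
+
+The equal-weight average was kept in the benchmarks because the goal is to measure pixel access cost, not the quality of the grayscale output.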
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\subsection{Emgu CV and Structure.Sketching Implementation}
+
+The implementation of the Emgu CV and Structure.Sketching libraries in the benchmarking framework is shown in the following code snippets. The code demonstrates how the Emgu CV library is used for image conversion, while Structure.Sketching is used for pixel iteration.
+
+For image conversion, the code reads an image using \texttt{CvInvoke.Imread} and writes the image using \texttt{CvInvoke.Imwrite}. The warm-up phase and main iterations are executed in a similar manner to the previous libraries, with the elapsed time for each iteration recorded using a stopwatch object.
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Emgu CV Implementation (Image Conversion)}, label={lst:emgu_cv_structure_sketching_image_conversion}]
+using Emgu.CV;
+using Emgu.CV.CvEnum;
+using Emgu.CV.Structure;
+using Structure.Sketching;
+using Structure.Sketching.Formats;
+using Structure.Sketching.Colors;
+
+// Warm-up iterations
+for (int i = 0; i < warmupIterations; i++)
+{
+    stopwatch.Reset();
+    stopwatch.Start();
+
+    using (Mat image = CvInvoke.Imread(inputPath, ImreadModes.Color))
+    {
+        CvInvoke.Imwrite(outputPath, image);
+    }
+
+    stopwatch.Stop();
+    warmupTime += stopwatch.ElapsedMilliseconds;
+}
+
+// Main iterations
+for (int i = 0; i < iterations; i++)
+{
+    stopwatch.Reset();
+    stopwatch.Start();
+
+    using (Mat image = CvInvoke.Imread(inputPath, ImreadModes.Color))
+    {
+        CvInvoke.Imwrite(outputPath, image);
+    }
+
+    stopwatch.Stop();
+    totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds;
+    Console.WriteLine($"Iteration {i + 1}: Image conversion took {stopwatch.ElapsedMilliseconds} ms");
+}
+\end{lstlisting}
+
+For pixel iteration, Structure.Sketching is used: the code reads an image using \texttt{new Structure.Sketching.Image(imagePath)} and iterates over each pixel, calculating the grayscale value and setting the pixel value using
\texttt{image.Pixels[(y * width) + x]}. The warm-up phase and main iterations are executed in a similar manner to the previous libraries, with the elapsed time for each iteration recorded using a stopwatch object. + +\begin{lstlisting}[language={[Sharp]C}, caption={Structure.Sketching Implementation (Pixel Iteration)}, label={lst:emgu_cv_structure_sketching_pixel_iteration}] +// Warm-up iterations +for (int i = 0; i < warmupIterations; i++) +{ + stopwatch.Reset(); + stopwatch.Start(); + + var image = new Structure.Sketching.Image(imagePath); + int width = image.Width; + int height = image.Height; + + for (int y = 0; y < height; y++) + { + for (int x = 0; x < width; x++) + { + var pixel = image.Pixels[(y * width) + x]; + byte gray = (byte)((pixel.Red + pixel.Green + pixel.Blue) / 3); + image.Pixels[(y * width) + x] = new Color(gray, gray, gray, pixel.Alpha); + } + } + + stopwatch.Stop(); + warmupTime += stopwatch.ElapsedMilliseconds; +} + +// Main iterations +for (int i = 0; i < iterations; i++) +{ + stopwatch.Reset(); + stopwatch.Start(); + + var image = new Structure.Sketching.Image(imagePath); + int width = image.Width; + int height = image.Height; + + for (int y = 0; y < height; y++) + { + for (int x = 0; x < width; x++) + { + var pixel = image.Pixels[(y * width) + x]; + byte gray = (byte)((pixel.Red + pixel.Green + pixel.Blue) / 3); + image.Pixels[(y * width) + x] = new Color(gray, gray, gray, pixel.Alpha); + } + } + + stopwatch.Stop(); + totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds; + Console.WriteLine($"Iteration {i + 1}: Pixel iteration took {stopwatch.ElapsedMilliseconds} ms"); +} +\end{lstlisting} + +Grayscale conversion is performed on each pixel by computing the average of the red, green, and blue components using the formula \texttt{(byte)((pixel.Red + pixel.Green + pixel.Blue) / 3)}. The grayscale value is then assigned to each color channel to create a grayscale image. 
The benchmarking process collects the results from both image conversion and pixel iteration. These results are aggregated into a tuple containing the warm-up time, the average time (excluding the warm-up phase), and the total time (including the warm-up phase). Finally, this data is used to generate an Excel file that summarizes the performance metrics.
diff --git a/sections/Chapter-3-sections/Memory-Profiling.tex b/sections/Chapter-3-sections/Memory-Profiling.tex
new file mode 100644
index 0000000..d5f2412
--- /dev/null
+++ b/sections/Chapter-3-sections/Memory-Profiling.tex
@@ -0,0 +1,68 @@
+\section{Memory Profiling and Performance Analysis}
+
+In any high-performance image processing application, measuring raw execution time is not enough; memory consumption is equally critical. This section describes the integration of memory profiling into the benchmarking framework, complementing the time-based measurements with a comprehensive view of the performance characteristics of each library. Using BenchmarkDotNet—a widely used tool for .NET performance analysis—we captured detailed metrics on memory allocation and garbage collection behavior, which allowed us to understand the trade-offs between processing speed and resource utilization.
+
+The memory profiling component evaluates not only mean execution times but also the memory allocated during both image conversion and pixel iteration. Using BenchmarkDotNet’s \texttt{[MemoryDiagnoser]}, \texttt{[Orderer]}, and \texttt{[RankColumn]} attributes, data on memory consumption, garbage collection events, and total allocated memory were collected for each benchmarked operation. By default, BenchmarkDotNet automatically determines how many warm-up and measurement iterations to run for each method, based on the workload, the environment, and the statistical requirements for accurate measurements.
There is therefore no need to implement a fixed iteration count for each method manually.
+
+The following code demonstrates how memory profiling was implemented, using the ImageSharp image conversion benchmark as an example:
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Memory Profiling and Performance Analysis (ImageSharp)}, label={lst:memory-profiling}]
+using BenchmarkDotNet.Attributes;
+using BenchmarkDotNet.Order;
+using BenchmarkDotNet.Running;
+using SixLabors.ImageSharp;
+using SixLabors.ImageSharp.Formats.Png;
+using SixLabors.ImageSharp.PixelFormats;
+
+[MemoryDiagnoser]
+[Orderer(SummaryOrderPolicy.FastestToSlowest)]
+[RankColumn]
+public class Benchmarks
+{
+    private const string InputImagePath = "./../../../../../xl1.jpg";
+    private const string OutputImagePath = "./../../../../o.png";
+
+    [Benchmark]
+    public void ImageConversionBenchmark()
+    {
+        using (Image image = Image.Load(InputImagePath))
+        {
+            using (FileStream fs = new FileStream(OutputImagePath, FileMode.Create))
+            {
+                image.Save(fs, new PngEncoder());
+                Console.WriteLine("ImageConversionBenchmark completed");
+            }
+        }
+    }
+    // The Benchmarks class continues with the pixel iteration method shown below.
+\end{lstlisting}
+
+The same conversion logic as in the earlier benchmark is reused here, but iterations and a warm-up phase no longer need to be implemented manually. For presenting the \texttt{MemoryDiagnoser} results, the \texttt{Orderer(SummaryOrderPolicy.FastestToSlowest)} and \texttt{RankColumn} attributes were used to order the results from fastest to slowest execution time and to add a rank column to the summary table, respectively, providing a clearer view of the results.
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Memory Profiling and Performance Analysis (ImageSharp, Pixel Iteration)}, label={lst:memory-profiling-pixel}]
+[Benchmark]
+public void PixelIterationBenchmark()
+{
+    using (Image<Rgba32> image = Image.Load<Rgba32>(InputImagePath))
+    {
+        int width = image.Width;
+        int height = image.Height;
+
+        for (int y = 0; y < height; y++)
+        {
+            for (int x = 0; x < width; x++)
+            {
+                Rgba32 pixel = image[x, y];
+                byte gray = (byte)((pixel.R + pixel.G + pixel.B) / 3);
+                image[x, y] = new Rgba32(gray, gray, gray, pixel.A);
+            }
+        }
+        Console.WriteLine("PixelIterationBenchmark completed");
+    }
+}
+\end{lstlisting}
+
+The pixel iteration benchmark was implemented in a similar manner, with the same memory diagnostics attributes. The snippet above shows the pixel iteration benchmark for ImageSharp, in which each pixel of the image is converted to grayscale. The memory diagnostics provided by BenchmarkDotNet made it possible to track memory consumption and garbage collection events during the pixel iteration operation, giving valuable insight into the resource utilization of each library.
+
+This code exemplifies our approach to memory diagnostics. By annotating the benchmark class with \texttt{[MemoryDiagnoser]}, BenchmarkDotNet automatically collects data on memory usage, including the number of garbage collection (GC) events and the total allocated memory during each benchmarked operation. Similar implementations were written for the other libraries as well.
+
+This level of granularity provided insights that went beyond raw timing metrics, revealing, for example, that while Emgu CV might be faster in certain operations, its higher memory consumption could be a concern for applications running on memory-constrained systems.
\ No newline at end of file
diff --git a/sections/Chapter-3-sections/Pixel-Iteration.tex b/sections/Chapter-3-sections/Pixel-Iteration.tex
new file mode 100644
index 0000000..75155d9
--- /dev/null
+++ b/sections/Chapter-3-sections/Pixel-Iteration.tex
@@ -0,0 +1,82 @@
+\subsection{Pixel Iteration Benchmark Implementation}
+
+The pixel iteration benchmark is equally critical, as it measures the time taken to perform a basic image processing operation: converting an image to grayscale by iterating over each pixel. This benchmark simulates real-world scenarios in which complex filters and effects require individual pixel manipulation.
+
+For ImageSharp, the implementation involves loading the image as an array of pixels, processing each pixel to compute its grayscale value, and then updating the image accordingly. The following snippet illustrates this process:
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Pixel iteration benchmark implementation (ImageSharp-Testing.cs)}]
+using System;
+using System.Diagnostics; // Stopwatch
+using SixLabors.ImageSharp;
+using SixLabors.ImageSharp.Formats.Png;
+using SixLabors.ImageSharp.PixelFormats;
+
+public class PixelIterationBenchmark
+{
+    public static (double warmupTime, double averageTime, double totalTime) RunBenchmark(string imagePath, int iterations)
+    {
+        long totalElapsedMilliseconds = 0;
+        long warmupTime = 0;
+        int warmupIterations = 5;
+        Stopwatch stopwatch = new Stopwatch();
+
+        // Warm-up phase for pixel iteration
+        for (int i = 0; i < warmupIterations; i++)
+        {
+            stopwatch.Reset();
+            stopwatch.Start();
+            using (Image<Rgba32> image = Image.Load<Rgba32>(imagePath))
+            {
+                int width = image.Width;
+                int height = image.Height;
+                for (int y = 0; y < height; y++)
+                {
+                    for (int x = 0; x < width; x++)
+                    {
+                        Rgba32 pixel = image[x, y];
+                        byte gray = (byte)((pixel.R + pixel.G + pixel.B) / 3);
+                        image[x, y] = new Rgba32(gray, gray, gray, pixel.A);
+                    }
+                }
+            }
+            stopwatch.Stop();
+            warmupTime += stopwatch.ElapsedMilliseconds;
+        }
+
+        // Main iterations to measure
pixel iteration performance + for (int i = 0; i < iterations; i++) + { + stopwatch.Reset(); + stopwatch.Start(); + using (Image<Rgba32> image = Image.Load<Rgba32>(imagePath)) + { + int width = image.Width; + int height = image.Height; + for (int y = 0; y < height; y++) + { + for (int x = 0; x < width; x++) + { + Rgba32 pixel = image[x, y]; + byte gray = (byte)((pixel.R + pixel.G + pixel.B) / 3); + image[x, y] = new Rgba32(gray, gray, gray, pixel.A); + } + } + } + stopwatch.Stop(); + totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds; + Console.WriteLine($"Iteration {i + 1}: Pixel iteration took {stopwatch.ElapsedMilliseconds} ms"); + } + + double averageTime = totalElapsedMilliseconds / (double)iterations; + double totalTime = warmupTime + totalElapsedMilliseconds; + Console.WriteLine($"Warm-up: {warmupTime} ms, Average: {averageTime} ms, Total: {totalTime} ms"); + + return (warmupTime, averageTime, totalTime); + } +} +\end{lstlisting} + +The code measures the performance of a grayscale conversion operation by iterating over each pixel of an image. As in the image conversion, it uses a timer (Stopwatch) and divides the process into two phases: a warm-up phase and a measurement phase. During the warm-up phase, the image is loaded and processed five times. This phase helps stabilize performance by mitigating any startup overheads. Each iteration involves loading the image, iterating over its width and height, reading each pixel, computing the grayscale value by averaging the red, green, and blue channels, and assigning the new grayscale value back while preserving the alpha channel. The use of the \texttt{using} statement ensures that the image is properly disposed after processing. + +In the measurement phase, the same processing occurs over a user-specified number of iterations (100 iterations). After running all iterations, the code calculates the average time per iteration and the total time including warm-up. 
This approach isolates the steady-state performance from any one-time overhead, resulting in more accurate measurements that reflect the true cost of pixel-by-pixel manipulation.
+
+The design emphasizes clear resource management, detailed timing, and the separation of initialization costs from the main measurement, all of which are crucial when every microsecond of processing time matters in image manipulation scenarios.
+
+The main focus of the implementation was to capture the interplay between algorithmic efficiency and system-level resource management. Every pixel operation is executed in a tight loop, and even minor inefficiencies can accumulate over hundreds of iterations. The careful loop structure and the stopwatch-based timing reflect the attention to detail required during development: even in high-level libraries such as ImageSharp, every microsecond counts when processing large images.
\ No newline at end of file
diff --git a/sections/Chapter-3-sections/Result-Export.tex b/sections/Chapter-3-sections/Result-Export.tex
new file mode 100644
index 0000000..935131b
--- /dev/null
+++ b/sections/Chapter-3-sections/Result-Export.tex
@@ -0,0 +1,42 @@
+\section{Result Export and Data Aggregation}
+
+Once the performance and memory metrics were collected, the next challenge was to present the results in a coherent and accessible manner. Excel was chosen as the output format because of its widespread adoption and ease of use for further analysis. The \texttt{OfficeOpenXml} namespace, which is part of the EPPlus library, allows Excel files to be created and manipulated from .NET applications. The \texttt{ExcelExporter} class was implemented to aggregate the benchmark results and export them to an Excel file.
+
+The code snippet below illustrates how the benchmark results are aggregated and exported to an Excel file:
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Result Export and Data Aggregation}, label={lst:result-export}]
+using System.IO; // FileInfo
+using OfficeOpenXml;
+
+public class ExcelExporter
+{
+    public static void ExportResults(string excelOutputPath,
+        (double warmupTime, double averageTime, double totalTime) imageConversionResults,
+        (double warmupTime, double averageTime, double totalTime) pixelIterationResults)
+    {
+        using (var package = new ExcelPackage())
+        {
+            var worksheet = package.Workbook.Worksheets.Add("Benchmark Results");
+            worksheet.Cells[1, 1].Value = "Benchmark";
+            worksheet.Cells[1, 2].Value = "Warm-Up Time (ms)";
+            worksheet.Cells[1, 3].Value = "Average Time (ms)";
+            worksheet.Cells[1, 4].Value = "Total Time (ms)";
+
+            worksheet.Cells[2, 1].Value = "Image Conversion";
+            worksheet.Cells[2, 2].Value = imageConversionResults.warmupTime;
+            worksheet.Cells[2, 3].Value = imageConversionResults.averageTime;
+            worksheet.Cells[2, 4].Value = imageConversionResults.totalTime;
+
+            worksheet.Cells[3, 1].Value = "Pixel Iteration";
+            worksheet.Cells[3, 2].Value = pixelIterationResults.warmupTime;
+            worksheet.Cells[3, 3].Value = pixelIterationResults.averageTime;
+            worksheet.Cells[3, 4].Value = pixelIterationResults.totalTime;
+
+            package.SaveAs(new FileInfo(excelOutputPath));
+        }
+    }
+}
+\end{lstlisting}
+
+The \texttt{ExcelExporter} class creates a structured Excel file with a single worksheet in which each benchmark operation occupies one row, with columns for the warm-up time, average time, and total time. The resulting file provides a clear and concise summary of the benchmark results, making it easy to compare the performance characteristics of each library.
+
+By automating the process of result aggregation, the framework not only saves time but also minimizes the risk of manual errors.
Each cell in the generated Excel file is populated with benchmark data, and the resulting spreadsheet can easily be imported into analytical tools for further exploration. The export step thus serves as a bridge between the raw performance data and the actionable insights that drive decision-making in software optimization.
diff --git a/sections/Chapter-3-sections/System-Architecture.tex b/sections/Chapter-3-sections/System-Architecture.tex
new file mode 100644
index 0000000..da12af1
--- /dev/null
+++ b/sections/Chapter-3-sections/System-Architecture.tex
@@ -0,0 +1,58 @@
+\section{System Architecture and Design Rationale}
+
+The design of our benchmarking framework was guided by the need for consistency, repeatability, and scientific rigor. The system was architected to support multiple libraries through a common interface, ensuring that each library’s performance could be measured under identical conditions. At the core of the design was a two-phase benchmarking process: an initial warm-up phase to account for any initialization overhead, followed by a main test phase in which the actual performance metrics were recorded.
+
+In constructing the system, several important decisions were made. First, we employed a modular approach, separating the benchmarking routines into distinct components. This allowed us to encapsulate the logic for image conversion and pixel iteration into separate classes, each responsible for executing a series of timed iterations and logging the results.
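+These components can be viewed as implementing a common contract. The sketch below illustrates the idea; the \texttt{IImageBenchmark} interface name and member signatures are illustrative rather than taken verbatim from the framework:
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Illustrative sketch of a common benchmark contract}]
+    // Hypothetical interface: every library integration exposes the same
+    // two timed operations and returns (warm-up, average, total) times.
+    public interface IImageBenchmark
+    {
+        (double warmupTime, double averageTime, double totalTime)
+            RunImageConversion(string inputPath, string outputPath, int iterations);
+
+        (double warmupTime, double averageTime, double totalTime)
+            RunPixelIteration(string imagePath, int iterations);
+    }
+\end{lstlisting}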
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Benchmark components of the framework}]
+    public class ImageConversionBenchmark
+    {
+        // Benchmarking logic for image conversion
+    }
+
+    public class PixelIterationBenchmark
+    {
+        // Benchmarking logic for pixel iteration
+    }
+\end{lstlisting}
+
+The architecture also included a dedicated component for result aggregation, which exported data into an Excel file using EPPlus, thereby facilitating further analysis and visualization.
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Result aggregation component}]
+    using OfficeOpenXml;
+
+    public class ExcelExporter
+    {
+        // Logic for exporting benchmark results to an Excel sheet in a structured format
+    }
+\end{lstlisting}
+
+An essential aspect of the design was the uniformity of testing. Although the libraries differ in their implementation details, the benchmarking framework abstracts these differences away. Each library was integrated by implementing the same sequence of operations: reading an image from disk, processing the image (either converting its format or iterating over its pixels to apply a grayscale filter), and finally saving the processed image back to disk. This uniform methodology ensured that the performance comparisons were both fair and reproducible.
+
+The architecture also accounted for system-level factors such as memory management and garbage collection. In C\#, where unmanaged resources must be explicitly disposed of, the design included rigorous cleanup routines to ensure that each iteration began with a clean slate. This attention to detail was crucial for obtaining accurate measurements, as any residual state from previous iterations could skew the results.
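+The clean-slate policy between iterations can be sketched as follows. This is a minimal illustration of the idea rather than code from the framework; the \texttt{RunTimedIteration} helper is hypothetical:
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Illustrative sketch of per-iteration cleanup}]
+    using System;
+    using System.Diagnostics;
+
+    public static class IterationHelper
+    {
+        // Hypothetical helper: force a full collection before timing so that
+        // garbage left over from the previous iteration cannot trigger a GC
+        // pause inside the timed region.
+        public static long RunTimedIteration(Action work)
+        {
+            GC.Collect();
+            GC.WaitForPendingFinalizers();
+            GC.Collect();
+
+            Stopwatch stopwatch = Stopwatch.StartNew();
+            work(); // e.g. load, process, and save one image
+            stopwatch.Stop();
+            return stopwatch.ElapsedMilliseconds;
+        }
+    }
+\end{lstlisting}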
+
+\begin{lstlisting}[language={[Sharp]C}, caption={Design of the benchmarking framework}]
+    using BenchmarkDotNet.Attributes;
+    using BenchmarkDotNet.Running;
+
+    class Program
+    {
+        static void Main(string[] args)
+        {
+            BenchmarkRunner.Run<Benchmarks>();
+        }
+    }
+
+    [MemoryDiagnoser]
+    public class Benchmarks{
+
+        [Benchmark]
+        public void ImageConversionBenchmark(){
+            // Image conversion logic
+        }
+
+        [Benchmark]
+        public void PixelIterationBenchmark(){
+            // Pixel iteration logic
+        }
+    }
+\end{lstlisting}