From e16597f038fcd232308c7d93b1d21c2fdd8e03a4 Mon Sep 17 00:00:00 2001 From: "FAZELI SHAHROUDI Sepehr (INTERN)" <Sepehr.FAZELISHAHROUDI.intern@3ds.com> Date: Sun, 2 Mar 2025 22:41:17 +0100 Subject: [PATCH] Add: implementation chapter --- chapters/3-Implementation.tex | 114 +++++++++++++++++++++++++++++++++- 1 file changed, 113 insertions(+), 1 deletion(-) diff --git a/chapters/3-Implementation.tex b/chapters/3-Implementation.tex index 0a4c866..7df8ace 100644 --- a/chapters/3-Implementation.tex +++ b/chapters/3-Implementation.tex @@ -1 +1,113 @@ -\chapter{Implementation} \ No newline at end of file +\chapter{Implementation} + +In this chapter, we describe the practical realization of our benchmark for comparing image processing libraries. The benchmark focuses on two primary performance metrics—image loading time and pixel iteration time—which are critical in industrial applications where processing speed and resource efficiency are paramount. This chapter is organized into several sections. First, we provide an overview of the benchmark implementation and the rationale behind the chosen metrics. Next, we detail the code structure and key components, followed by a discussion on the measurement methodology and the processing and analysis of the results. Finally, we explain the selection criteria for the libraries under comparison. + + + +\section{Benchmark Implementation Overview} + +The primary objective of our benchmark is to quantify and compare the performance of various image processing libraries under controlled conditions. Two metrics were defined for this purpose: + +\begin{itemize} + \item \textbf{Image Loading Time:} This metric measures the time required to read an image file from disk into memory. In many industrial applications, rapid image loading is essential—for example, when images are acquired continuously from production lines for real-time quality control. 
+ \item \textbf{Pixel Iteration Time:} This metric captures the time taken to iterate over every pixel in a loaded image and perform a simple operation, such as converting the image to grayscale. Pixel iteration is a proxy for evaluating the efficiency of low-level image processing operations, which are common in tasks like filtering, segmentation, and feature extraction. +\end{itemize} + +These metrics were chosen because they represent two fundamental aspects of image processing performance: input/output efficiency and computational processing speed. While other metrics (such as encoding/decoding time or compression efficiency) could be considered, our focus on loading and pixel iteration provides a clear, measurable foundation that is directly relevant to many real-world industrial scenarios. + + + +\section{Code Structure and Key Components} + +The benchmark was implemented in C\# using the .NET Core framework. The implementation is modular, with separate components handling different aspects of the benchmark. Key modules include: + +\subsection{Test Harness} +A central test harness orchestrates the execution of the benchmark tests. It is responsible for: +\begin{itemize} + \item Initializing the testing environment. + \item Iterating through multiple runs so that the aggregated timings are statistically reliable. + \item Invoking library-specific methods for image loading and pixel iteration. + \item Logging and aggregating timing results. +\end{itemize} + +\subsection{Image Loading Module} +This module is responsible for measuring the image loading time. For each library under test, the module: +\begin{itemize} + \item Reads a predetermined image file from disk using the library’s native image-loading functions. + \item Utilizes the \texttt{Stopwatch} class to capture the precise duration from the start of the load operation until the image is fully loaded into memory. + \item Repeats the process several times to calculate an average loading time and to identify any anomalies.
+\end{itemize} + +\subsection{Pixel Iteration Module} +The pixel iteration module measures the time required to traverse the entire image and perform a basic pixel transformation (e.g., converting each pixel’s color value to grayscale). The module: +\begin{itemize} + \item Accesses the pixel data of the loaded image through library-specific APIs. + \item Iterates over the pixel array using a standard loop construct. + \item Applies a simple transformation to each pixel. + \item Measures the cumulative time taken for the entire iteration using the \texttt{Stopwatch} class. + \item Repeats the operation multiple times to ensure consistency and reliability of the results. +\end{itemize} + +\subsection{Data Aggregation and Logging} +Each test run logs the measured times to a results file and/or console output. Post-processing scripts then aggregate the data to calculate key statistical metrics such as mean, standard deviation, and confidence intervals. This structured logging ensures that the data analysis is both reproducible and transparent. + +Large fragments of the source code for these modules are included in the appendix. For instance, the file \textit{Emgu CV\_Structure.Sketching\_testing.cs} contains the implementation details for running tests using Emgu CV and Structure.Sketching, while \textit{Imagesharp\_Testing.cs} includes the analogous tests for ImageSharp. + + + +\section{Measurement Methodology} + +\subsection{Timing and Performance Tools} +To obtain precise measurements, our implementation relies on the .NET \texttt{Stopwatch} class, which offers high-resolution timing functionality. Each critical operation—whether loading an image or iterating through pixels—is wrapped within start and stop calls to capture the elapsed time accurately. The high resolution of the \texttt{Stopwatch} ensures that even operations on the order of milliseconds can be measured reliably. 
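To make the measurement pattern concrete, the following C\# sketch illustrates how an operation can be wrapped in \texttt{Stopwatch} start/stop calls and repeated to average out noise. This is an illustrative sketch only, not the actual harness code (which is reproduced in the appendix); the \texttt{MeasureAverageMs} name and the \texttt{Action} parameter are placeholders for the library-specific image-loading or pixel-iteration routines.

```csharp
using System;
using System.Diagnostics;

// Illustrative sketch of the timing pattern used in the benchmark:
// wrap the operation under test in Stopwatch start/stop calls and
// repeat it a fixed number of times to obtain an average duration.
static class TimingSketch
{
    public static double MeasureAverageMs(Action operation, int iterations = 100)
    {
        // One warm-up run so JIT compilation does not distort the first timing.
        operation();

        var stopwatch = new Stopwatch();
        double totalMs = 0;
        for (int i = 0; i < iterations; i++)
        {
            stopwatch.Restart();
            operation();   // e.g. a library-specific load or pixel-iteration call
            stopwatch.Stop();
            totalMs += stopwatch.Elapsed.TotalMilliseconds;
        }
        return totalMs / iterations;
    }
}
```

The warm-up call matters in .NET benchmarks because the first invocation pays the just-in-time compilation cost, which would otherwise inflate the first recorded sample.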
+ +\subsection{Repetition and Statistical Rigor} +Recognizing that individual timing measurements can be affected by transient system load and other noise factors, each test is repeated a predetermined number of times (typically 100 iterations). The aggregated results are then statistically analyzed to produce average times and to compute variance. This repetition helps ensure that the reported metrics are representative of typical performance rather than skewed by outliers. + +\subsection{Data Processing and Analysis} +Once the timing data is collected, it is processed using both built-in .NET data structures and external tools such as Excel or Python scripts for more in-depth statistical analysis. The processing involves: +\begin{itemize} + \item Calculating the arithmetic mean of the recorded times. + \item Determining the standard deviation to understand variability. + \item Generating graphs and charts that visually compare the performance across different libraries. + \item Performing comparative analysis to highlight strengths and weaknesses, and to identify any statistically significant differences. +\end{itemize} + +This detailed analysis helps in understanding not only which library performs best on average but also how consistent each library’s performance is over multiple runs. + + + +\section{Library Selection Criteria} + +The libraries compared in this benchmark were chosen based on several key criteria: + +\subsection{Market Adoption and Community Support} +Libraries with a broad user base and active community support were prioritized, as these are more likely to be well-maintained and to have extensive documentation and real-world usage examples. + +\subsection{Feature Set and Functionality} +The libraries were evaluated based on the breadth and depth of their image processing capabilities. Key features include: +\begin{itemize} + \item Support for multiple image formats.
+ \item Efficient handling of image loading and pixel-level operations. + \item Availability of hardware acceleration features. +\end{itemize} +Libraries that provide a rich set of features and that have proven utility in industrial applications were deemed more favorable. + +\subsection{Performance and Resource Efficiency} +Preliminary benchmarks and documented performance metrics played an important role in the selection process. Libraries that demonstrated superior performance in prior studies—especially in terms of speed and low memory usage—were included in the evaluation. + +\subsection{Compatibility and Integration} +Ease of integration with existing systems is a critical consideration for industrial applications. The selected libraries must be compatible with modern development frameworks (such as .NET Core) and support a modular architecture that facilitates easy benchmarking and subsequent deployment. + +Based on these criteria, the libraries chosen for this benchmark include ImageSharp, Emgu CV, and Structure.Sketching, among others. Each library was subjected to the same set of tests under identical conditions to ensure a fair and objective comparison. + + + +% \section{Summary of Implementation} + +% The benchmark implementation described in this chapter is designed to provide a rigorous, reproducible, and fair comparison of image processing libraries based on two critical metrics: image loading time and pixel iteration time. By implementing a modular test harness in C\#, leveraging high-resolution timing mechanisms, and adopting a statistically rigorous approach to data collection, the benchmark delivers reliable performance data that can guide the selection of the most suitable library for industrial image processing applications. + +% Larger fragments of the source code that implement these tests can be found in the Appendix, ensuring that all details are available for verification and further analysis. 
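The mean and standard deviation computations described in the data-processing step can be sketched as follows. This is a simplified illustration using plain .NET collections, under the assumption of a sample (rather than population) standard deviation; the \texttt{ResultAggregation} and \texttt{Summarize} names are placeholders, not the names used in the actual post-processing scripts.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified sketch of the statistical aggregation applied to the
// logged timings: arithmetic mean and sample standard deviation.
static class ResultAggregation
{
    public static (double Mean, double StdDev) Summarize(IReadOnlyList<double> timesMs)
    {
        double mean = timesMs.Average();
        // Sample standard deviation (n - 1 denominator).
        double variance = timesMs.Sum(t => (t - mean) * (t - mean))
                          / (timesMs.Count - 1);
        return (mean, Math.Sqrt(variance));
    }
}
```

For example, the series 10, 12, 11, 13 ms yields a mean of 11.5 ms and a sample standard deviation of about 1.29 ms, figures of the kind that feed the comparison charts described above.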
+ + + +This chapter has detailed the technical implementation of the benchmark, including the rationale behind the selected metrics, the code structure, the measurement methodology, and the criteria used for library selection. The comprehensive approach ensures that the performance comparisons presented later in this thesis are both robust and reliable, thereby providing a solid foundation for the evaluation of image processing libraries in industrial contexts. \ No newline at end of file -- GitLab