diff --git a/chapters/2-Methodology.tex b/chapters/2-Methodology.tex
index 450ec45f51a99ca4eefddde06054e12756158bc0..83a36748824140625782745d8fc035081798a5e4 100644
--- a/chapters/2-Methodology.tex
+++ b/chapters/2-Methodology.tex
@@ -1,20 +1,20 @@
 \chapter{Methodology}
 
-This chapter outlines the journey and rationale behind the methodology for comparing various image processing libraries. It explains our choice of performance metrics, describes how the metrics were obtained and processed, and details the criteria used to select the libraries under investigation. The aim is to provide an approach that not only yields quantitative insights but also connects with real-world applications.
+This chapter outlines the journey and rationale behind the methodology for comparing various image processing libraries. It explains the choice of performance metrics, describes how the metrics were obtained and processed, and details the criteria used to select the libraries under investigation. The aim is to provide an approach that not only yields quantitative insights but also connects with real-world applications.
 
 \section{Selection of Libraries for Comparison}
 
-The choice of libraries for this study was driven by several factors, including functionality, licensing, ease of integration, and performance potential. Most of image processing libraries provided wrappers or bindings for .NET, the language of choice for our experiments. The search of libraries revealed a wide range of options—from the commercial ImageSharp to various open-source alternatives such as OpenCvSharp, Emgu CV, SkiaSharp, Magick.NET, and others.
+The choice of libraries for this study was driven by several factors, including functionality, licensing, ease of integration, and performance potential. Most image processing libraries provide wrappers or bindings for .NET, the language of choice for these experiments. The search for libraries revealed a wide range of options—from the commercial ImageSharp to various open-source alternatives such as OpenCvSharp, Emgu CV, SkiaSharp, Magick.NET, and others.
 
-With consideration of real-world image processing applications needs, certain technical features were considered essential for our evaluation, such as support for common image formats (JPEG, PNG, BMP, WebP, etc.), mutative operations (e.g., pixel manipulation, color space conversion), and high-level operations (e.g., image composition, filtering). all libraries were evaluated based on their ability to handle these tasks efficiently by inspecting their APIs and documentation. Also the licensing model, integration effort, and community support were considered to ensure that the selected libraries were not only technically capable but also practical for real-world applications. The data gathered from this is available including the table of feature comparison that was created for each library, and available in the appendix.
+Considering the needs of real-world image processing applications, certain technical features were deemed essential for the evaluation, such as support for common image formats (JPEG, PNG, BMP, WebP, etc.), mutative operations (e.g., pixel manipulation, color space conversion), and high-level operations (e.g., image composition, filtering). All libraries were evaluated on their ability to handle these tasks efficiently by inspecting their APIs and documentation.
+The licensing model, integration effort, and community support were also considered to ensure that the selected libraries were not only technically capable but also practical for real-world applications. The data gathered from this evaluation, including the feature-comparison table created for each library, is available in the appendix (see Chapter~\ref{appendix:evaluation-libraries}).
 
 \begin{center}
-    \includegraphics[width=25em]{media/Methodology - selection.png}
-    \captionof{figure}{Selected Libraries for Comparative Evaluation}
+    \includegraphics[width=33em]{media/Methodology - selection.png}
+    \captionof{figure}{The libraries selected for comparison and the main features of each library or combination of libraries.}
     \label{fig:selected-libraries}
 \end{center}
 
-This research process provided a clear picture of the capabilities of each library and helped us identify the most suitable candidates for our performance tests. As a result, 5 suggested libraries or combinations of libraries were selected for the comparative evaluation: ImageSharp and Magick.NET as single library solutions, because they been considered to have capabilities to cover cover both lightweight and complex image processing tasks. And the combinations of OpenCvSharp with SkiaSharp, and Emgu CV with SkiaSharp, as they complement each other in terms of performance and functionality.
+As a result of this research, a clear picture of each library's capabilities was developed, and the most suitable candidates for the performance tests were identified. Consequently, five libraries or combinations of libraries were selected for the comparative evaluation: ImageSharp and Magick.NET as single-library solutions, given their capabilities to cover both lightweight and complex image processing tasks, and the combinations of OpenCvSharp with SkiaSharp and of Emgu CV with SkiaSharp, as they complement each other in terms of performance and functionality.
 
 \section{Performance Metrics and Criteria for Comparison}
 
@@ -22,12 +22,14 @@ Image processing is an integral part of many modern applications, from web servi
 The decision to focus on image conversion and pixel iteration was based on the need for metrics that could objectively and quantitatively measure core operations while remaining independent of higher-level library-specific features. Image conversion was chosen as it involves loading an image from disk, converting its format, and saving it back. This process mirrors common operations in web applications and desktop software where rapid image display is critical.
 
-Pixel iteration, on the other hand, was selected to capture the efficiency of low-level image manipulation. Many image processing tasks, such as filtering, transformation, and color adjustment, require access to each pixel individually. By measuring the time taken to iterate over all pixels and apply a basic grayscale conversion, we obtained a clear indicator of the library’s capability in handling computationally intensive tasks. These metrics were chosen over alternatives like image saving speed or memory usage because they directly reflect two complementary dimensions: high-level operational overhead and low-level data processing efficiency.
+Pixel iteration, on the other hand, was selected to capture the efficiency of low-level image manipulation. Many image processing tasks, such as filtering, transformation, and color adjustment, require access to each pixel individually. By measuring the time taken to iterate over all pixels and apply a basic grayscale conversion, a clear indicator of each library’s capability in handling computationally intensive tasks was obtained. These metrics were chosen over alternatives like image saving speed or memory usage because they directly reflect two complementary dimensions: high-level operational overhead and low-level data processing efficiency.
+
 \begin{center}
     \includegraphics[width=23em]{media/Methodology - 2.1.png}
     \vspace{0em}
-    \captionof{figure}{Methodological Approach Overview}
+    \captionof{figure}{The metrics used for the performance comparison of image processing libraries (image conversion and pixel iteration) and what each metric represents.}
+    \label{fig:performance-metrics}
     \label{fig:methodology-overview}
 \end{center}
 
@@ -38,8 +40,8 @@ Measurement techniques evolved from initial prototypes and iterative refinements
 The image conversion test was designed as a JPEG image file loaded from disk, converted to PNG format, and then saved. The JPEG and PNG formats were chosen as examples of a common conversion scenario since JPEG is widely used for image storage and PNG is a lossless format suitable for web applications. The entire process is timed from start to finish. This approach involves several steps that are repeated over many iterations. Initially, 5 iterations as warm-ups are executed to allow the system to stabilize. The warm-up durations are recorded separately and then excluded from the main performance analysis. Once the system is in a steady state, a fixed number of 100 iterations is performed, and the time taken for each one is recorded.
 
 \begin{center}
-    \includegraphics[width=23em]{media/Methodology - 2.2.1.png}
-    \captionof{figure}{Image Conversion Measurement Process}
+    \includegraphics[width=33em]{media/Methodology - 2.2.1.png}
+    \captionof{figure}{Diagram of the image conversion measurement process, showing the warm-up phase, the main iterations, and data collection.}
     \label{fig:image-conversion-measurement}
 \end{center}
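+
+To make the measurement procedure concrete, the following minimal sketch shows the shape of the conversion timing loop in C\#. It is illustrative rather than the exact benchmark code: ImageSharp is used as an example library, and the method name and file paths are placeholders.
+
+\begin{verbatim}
+// Illustrative sketch of the conversion timing loop (ImageSharp shown
+// as an example; each library under test uses its own load/save calls).
+using System.Collections.Generic;
+using System.Diagnostics;
+using SixLabors.ImageSharp;
+
+static List<double> MeasureConversion(string inputJpeg, string outputPng)
+{
+    var timings = new List<double>();
+    var stopwatch = new Stopwatch();
+
+    // Warm-up phase: 5 runs to let the system stabilize (their
+    // durations are tracked separately; timing omitted here for brevity).
+    for (int i = 0; i < 5; i++)
+    {
+        using var image = Image.Load(inputJpeg);
+        image.SaveAsPng(outputPng);
+    }
+
+    // Main phase: 100 timed load-convert-save cycles.
+    for (int i = 0; i < 100; i++)
+    {
+        stopwatch.Restart();
+        using var image = Image.Load(inputJpeg);  // load JPEG from disk
+        image.SaveAsPng(outputPng);               // encode and save as PNG
+        stopwatch.Stop();
+        timings.Add(stopwatch.Elapsed.TotalMilliseconds);
+    }
+
+    return timings;
+}
+\end{verbatim}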
@@ -50,8 +52,8 @@ The .NET \texttt{Stopwatch} class is used to record the elapsed time for each st
 The pixel iteration metric targets the efficiency of low-level pixel operations, which are foundational for many advanced image processing techniques. The focus here was on isolating the per-pixel operation, independent of any higher-level image processing abstractions. The measured time provided insights into how efficiently each library handles large amounts of pixel data, a critical factor when scaling to high-resolution images or real-time processing tasks.
 
 \begin{center}
-    \includegraphics[width=23em]{media/Methodology - 2.2.2.png}
-    \captionof{figure}{Pixel Iteration Measurement Process}
+    \includegraphics[width=33em]{media/Methodology - 2.2.2.png}
+    \captionof{figure}{Diagram of the pixel iteration measurement process, showing the warm-up phase, the main iterations, and data collection.}
     \label{fig:pixel-iteration-measurement}
 \end{center}
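+
+As a sketch of this per-pixel measurement, the timed grayscale pass has roughly the following shape (again illustrative: ImageSharp's pixel indexer and standard luminance weights are used, and the method name is a placeholder):
+
+\begin{verbatim}
+// Illustrative sketch of the timed per-pixel grayscale pass.
+using System.Diagnostics;
+using SixLabors.ImageSharp;
+using SixLabors.ImageSharp.PixelFormats;
+
+static double MeasurePixelIteration(string inputPath)
+{
+    using var image = Image.Load<Rgba32>(inputPath);
+
+    var stopwatch = Stopwatch.StartNew();
+    for (int y = 0; y < image.Height; y++)
+    {
+        for (int x = 0; x < image.Width; x++)
+        {
+            Rgba32 pixel = image[x, y];
+            // Standard luminance weights for grayscale conversion.
+            byte gray = (byte)(0.299 * pixel.R + 0.587 * pixel.G
+                               + 0.114 * pixel.B);
+            image[x, y] = new Rgba32(gray, gray, gray, pixel.A);
+        }
+    }
+    stopwatch.Stop();
+
+    // Loading is excluded from the measurement: only the loop is timed.
+    return stopwatch.Elapsed.TotalMilliseconds;
+}
+\end{verbatim}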
@@ -60,33 +62,33 @@ In this test, the image is loaded into memory, and a nested loop iterates over e
 \subsection{Criteria for Library Comparison}
 
-Our comparative evaluation was based on a set of well-defined criteria that reflect both technical performance and practical implementation considerations. The primary criteria were performance (as measured by our two key metrics), functionality (including support for a wide range of image processing tasks), ease of integration (the simplicity of adopting the library within a .NET environment), and licensing. In addition to performance, the integration of BenchmarkDotNet for memory profiling adds another layer to our analysis, allowing us to evaluate the trade-offs between speed and memory consumption.
+This comparative evaluation was based on a set of well-defined criteria that reflect both technical performance and practical implementation considerations. The primary criteria were performance (as measured by the two key metrics), functionality (including support for a wide range of image processing tasks), ease of integration (the simplicity of adopting the library within a .NET environment), and licensing. In addition to performance, the integration of BenchmarkDotNet for memory profiling added another layer to the analysis, allowing the trade-offs between speed and memory consumption to be evaluated.
 
-Functionality was assessed by mapping each library’s capabilities against a comprehensive feature set that included image loading, pixel manipulation, format conversion, and high-level operations such as image composition. Ease of integration was evaluated by considering the availability of wrappers or bindings, the clarity of documentation, and the level of community support. Licensing was scrutinized not only in terms of costs but also in terms of the freedoms and restrictions imposed by each license (e.g., Apache 2.0 and MIT licenses versus commercial licensing models).Tables of feature comparison that was created for each library, and available in the appendix.
+Functionality was assessed by mapping each library’s capabilities against a comprehensive feature set that included image loading, pixel manipulation, format conversion, and high-level operations such as image composition. Ease of integration was evaluated by considering the availability of wrappers or bindings, the clarity of documentation, and the level of community support. Licensing was scrutinized not only in terms of costs but also in terms of the freedoms and restrictions imposed by each license (e.g., Apache 2.0 and MIT licenses versus commercial licensing models). The feature-comparison tables created for each library are available in the appendix (see Chapter~\ref{appendix:evaluation-libraries}).
 
 \begin{center}
-    \includegraphics[width=16em]{media/Methodology - criteria.png}
-    \captionof{figure}{Library Selection Criteria}
+    \includegraphics[width=19em]{media/Methodology - criteria.png}
+    \captionof{figure}{Graphical representation of the criteria used for library comparison: performance, functionality, ease of integration, community support, and licensing.}
     \label{fig:library-selection-criteria}
 \end{center}
 
-Finally, our selection criteria for libraries are grounded in both technical and practical considerations, ensuring that our findings are relevant to a wide range of use cases—from small-scale applications to enterprise-level deployments.
+Finally, the selection criteria are grounded in both technical and practical considerations, ensuring that the findings are relevant to a wide range of use cases—from small-scale applications to enterprise-level deployments.
 
 \section{Experimental Setup and Environment}
 
-The tests were conducted in a controlled environment to ensure reproducibility and accuracy. Insuring that the hardware setup and software environment were consistent across all experiments by using same machine to eliminate variability due to hardware differences. The software environment was configured with a timer, namely the \texttt{Stopwatch} class, which provided millisecond-level precision. And memory profiling was done using BenchmarkDotNet in separate tests. This allowed us to capture not only execution times but also memory allocations and garbage collection metrics, even though our primary focus remained on processing speed.
+The tests were conducted in a controlled environment to ensure reproducibility and accuracy. Running all experiments on the same machine kept the hardware setup and software environment consistent and eliminated variability due to hardware differences. The software environment was configured with a timer, namely the \texttt{Stopwatch} class, which provided millisecond-level precision. Memory profiling was done using BenchmarkDotNet in separate tests to capture not only execution times but also memory allocations and garbage collection metrics, even though the primary focus remained on processing speed.
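+
+As an illustration of these separate memory profiling runs, a benchmark can be declared along the following lines (a minimal sketch: the class name, benchmark body, and file paths are hypothetical, while \texttt{MemoryDiagnoser}, \texttt{Benchmark}, and \texttt{BenchmarkRunner} are the standard BenchmarkDotNet attributes and runner):
+
+\begin{verbatim}
+// Minimal sketch of a BenchmarkDotNet memory-profiling run.
+using BenchmarkDotNet.Attributes;
+using BenchmarkDotNet.Running;
+using SixLabors.ImageSharp;
+
+[MemoryDiagnoser] // adds allocation and GC columns to the report
+public class ConversionBenchmarks
+{
+    [Benchmark]
+    public void ConvertJpegToPng()
+    {
+        // Hypothetical file paths for illustration.
+        using var image = Image.Load("input.jpg");
+        image.SaveAsPng("output.png");
+    }
+}
+
+public static class Program
+{
+    // BenchmarkDotNet prints its allocation/GC report to the console.
+    public static void Main() => BenchmarkRunner.Run<ConversionBenchmarks>();
+}
+\end{verbatim}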
 
 \section{Data Collection and Processing}
 
-The collected data includes the total time taken for the warm-up phase, the average time per iteration during the main phase, and the cumulative time including warm-up. These metrics together provide a comprehensive view for both the image conversion and pixel iteration tests. The simplicity of this test allows for easy replication and clear comparisons among different libraries, which is essential when making performance-based decisions. Each iteration's timing data was recorded using the high-resolution \texttt{Stopwatch} class and stored in memory. Following the completion of each test, the raw data was exported to an Excel file using the EPPlus library. This allowes for statistical analysis later, such as calculating the mean, median, and standard deviation of the performance times. The Excel files also served as a repository for comparative charts and graphs, which will be used to visually represent our findings.
+The collected data includes the total time taken for the warm-up phase, the average time per iteration during the main phase, and the cumulative time including warm-up. These metrics together provide a comprehensive view for both the image conversion and pixel iteration tests. The simplicity of this test allows for easy replication and clear comparisons among different libraries, which is essential when making performance-based decisions. Each iteration's timing data was recorded using the high-resolution \texttt{Stopwatch} class and stored in memory. Following the completion of each test, the raw data was exported to an Excel file using the EPPlus library. This allows for later statistical analysis, such as calculating the mean, median, and standard deviation of the performance times. The Excel files also served as a repository for comparative charts and graphs, which will be used to visually represent the findings.
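+
+The export step can be pictured with a short sketch along these lines (the method and file names are hypothetical; \texttt{ExcelPackage} and the worksheet accessors are the standard EPPlus API, though the licensing setup varies across EPPlus versions):
+
+\begin{verbatim}
+// Illustrative sketch of exporting per-iteration timings with EPPlus.
+using System.Collections.Generic;
+using System.IO;
+using OfficeOpenXml;
+
+static void ExportTimings(IReadOnlyList<double> timings, string filePath)
+{
+    // Required by EPPlus 5+; the exact licensing API varies by version.
+    ExcelPackage.LicenseContext = LicenseContext.NonCommercial;
+
+    using var package = new ExcelPackage();
+    var sheet = package.Workbook.Worksheets.Add("Results");
+    sheet.Cells[1, 1].Value = "Iteration";
+    sheet.Cells[1, 2].Value = "Elapsed (ms)";
+
+    // One row per timed iteration, ready for mean/median/std analysis.
+    for (int i = 0; i < timings.Count; i++)
+    {
+        sheet.Cells[i + 2, 1].Value = i + 1;
+        sheet.Cells[i + 2, 2].Value = timings[i];
+    }
+
+    package.SaveAs(new FileInfo(filePath));
+}
+\end{verbatim}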
 
 \begin{center}
     \includegraphics[width=33em]{media/Methodology - data.png}
-    \captionof{figure}{Data Collection and Processing Workflow}
+    \captionof{figure}{Graphical representation of the data collection and processing steps, including the use of the \texttt{Stopwatch} class, the EPPlus library, and Excel files.}
     \label{fig:data-collection-processing}
 \end{center}
 
-Additionally for memory profiling, the BenchmarkDotNet library was used to measure memory consumption during the tests. BenchmarkDotNet provides detailed memory allocation and garbage collection reports in console output, which were captured and stored. Then after analyzing the data, the results were aggregated and visualized to provide another layer of insight into the libraries' performance characteristics. These visuals were then used to inform our conclusions regarding speed, memory efficiency, and overall suitability for various tasks.
+Additionally, for memory profiling, the BenchmarkDotNet library was used to measure memory consumption during the tests. BenchmarkDotNet provides detailed memory allocation and garbage collection reports in its console output, which were captured and stored. After analyzing the data, the results were aggregated and visualized to provide another layer of insight into the libraries' performance characteristics. These visuals were then used to form the conclusions regarding speed, memory efficiency, and overall suitability for various tasks.
 
 \section{Conclusion}
diff --git a/media/Methodology - criteria.png b/media/Methodology - criteria.png
index 1e3184266bd67ef8c4a187ffaaa4a620ab5a09dc..fd63f9b658953d2914be59af828605e69237cf12 100644
Binary files a/media/Methodology - criteria.png and b/media/Methodology - criteria.png differ