From fbdf5cc712d2046139a93c40c58a6a77c915fdc1 Mon Sep 17 00:00:00 2001
From: "FAZELI SHAHROUDI Sepehr (INTERN)" <Sepehr.FAZELISHAHROUDI.intern@3ds.com>
Date: Mon, 24 Mar 2025 23:37:02 +0100
Subject: [PATCH] final edit

---
 chapters/3-Implementation.tex | 2 +-
 chapters/4-Results.tex | 4 ++--
 chapters/5-Discussion.tex | 10 +++++-----
 sections/Chapter-1-sections/Research-Questions.tex | 2 +-
 sections/Chapter-3-sections/Memory-Profiling.tex | 6 +++---
 sections/Chapter-3-sections/Pixel-Iteration.tex | 2 +-
 sections/Chapter-3-sections/System-Architecture.tex | 7 +++----
 .../Analysis_and_Interpretation_of_Results.tex | 4 ++--
 .../Image_conversion_benchmark_results.tex | 4 ++--
 .../Chapter-4-sections/Memory_benchmark_results.tex | 4 ++--
 .../Pixel_iteration_benchmark_results.tex | 4 ++--
 11 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/chapters/3-Implementation.tex b/chapters/3-Implementation.tex
index 1ae50bc..980f37b 100644
--- a/chapters/3-Implementation.tex
+++ b/chapters/3-Implementation.tex
@@ -2,7 +2,7 @@
This chapter details the implementation of a comprehensive benchmarking framework to evaluate several image processing libraries, including ImageSharp, OpenCvSharp paired with SkiaSharp, Emgu CV coupled with Structure.Sketching, and Magick.NET integrated with MagicScaler. The objective was to create an end-to-end system that not only measures execution times for common image operations but also provides insights into memory usage.
-This has been sought to answer key questions regarding the efficiency of image conversion and pixel iteration operations—two fundamental tasks in image processing. The following sections describe the review process, architectural decisions, and technical implementations in the study.
+This framework was designed to answer key questions regarding the efficiency of image conversion and pixel iteration operations—two fundamental tasks in image processing. The following sections describe the review process, architectural decisions, and technical implementations in the study. The full implementation, including source code and benchmarking results, is available in the GitLab repository\footnote{\url{https://mygit.th-deg.de/sf07627/fazeli_shahroudi-sepehr-master-sthesis}}.
\input{sections/Chapter-3-sections/System-Architecture.tex}

diff --git a/chapters/4-Results.tex b/chapters/4-Results.tex
index cded2ed..58467d6 100644
--- a/chapters/4-Results.tex
+++ b/chapters/4-Results.tex
@@ -1,12 +1,12 @@
\chapter{Results}
-This chapter presents our findings from the benchmarking experiments conducted to evaluate the performance of alternative image processing libraries. The results include quantitative data on image conversion and pixel iteration times, as well as memory consumption for each library or combination tested. The data generated will be used to answer the research question and support the hypotheses formulated in the previous chapters. The benchmarking approach consisted of running two primary tests on each library: an image conversion test that measured the time taken to load, process, and save images, and a pixel iteration test that recorded the time required to process every pixel in an image for a grayscale conversion. These experiments were performed in a controlled environment, with warm-up iterations included to reduce the impact of initial overhead.
Memory consumption was tracked alongside processing times using BenchmarkDotNet, thereby offering a complete picture of both speed and resource utilization.\\ +This chapter presents findings from the benchmarking experiments conducted to evaluate the performance of alternative image processing libraries. The results include quantitative data on image conversion and pixel iteration times, as well as memory consumption for each library or combination tested. The data generated will be used to answer the research question and support the hypotheses formulated in the previous chapters. The benchmarking approach consisted of running two primary tests on each library: an image conversion test that measured the time taken to load, process, and save images, and a pixel iteration test that recorded the time required to process every pixel in an image for a grayscale conversion. These experiments were performed in a controlled environment, with warm-up iterations included to reduce the impact of initial overhead. Memory consumption was tracked alongside processing times using BenchmarkDotNet, thereby offering a complete picture of both speed and resource utilization.\\ %%[PLACEHOLDER: a media summarizing benchmarking methodology] Before discussing the results in detail, it is important to review the benchmarking design. In this study, each library was tested under the same conditions: the same input image was used, a fixed number of warm-up iterations were performed to reduce the effects of just-in-time compilation and caching, and finally, 100 main iterations were executed to ensure reliable statistics. For the image conversion test, the time measured was the duration needed to load a JPEG image, convert it to PNG, and save it back to disk. In the pixel iteration test, the focus was on recording the time required to access and change each pixel for producing a grayscale version of the image. -Memory diagnostics were captured concurrently, with particular attention to allocated memory and garbage collection events. This dual approach ensured that our results were not solely focused on speed but also took into account the resource efficiency of each solution. +Memory diagnostics were captured concurrently, with particular attention to allocated memory and garbage collection events. This dual approach ensured that the results were not solely focused on speed but also took into account the resource efficiency of each solution. %%[PLACEHOLDER: a media Diagram of benchmarking process] or reference to it diff --git a/chapters/5-Discussion.tex b/chapters/5-Discussion.tex index f86226b..6a14a52 100644 --- a/chapters/5-Discussion.tex +++ b/chapters/5-Discussion.tex @@ -4,11 +4,11 @@ This chapter interprets the results obtained in the benchmarking experiments, pl \section{Interpreting the Results: Performance vs. Practicality} -The results obtained from our benchmarking study reveal a clear hierarchy of performance among the tested libraries. However, performance alone does not determine the best library for a given use case. The ideal choice depends on a variety of factors, including memory efficiency, ease of integration, licensing constraints, and the specific needs of the application. +The results obtained from the benchmarking study reveal a clear hierarchy of performance among the tested libraries. However, performance alone does not determine the best library for a given use case. 
The ideal choice depends on a variety of factors, including memory efficiency, ease of integration, licensing constraints, and the specific needs of the application.

\subsection{Performance Trade-offs and Suitability for Real-World Applications}

-From performance standpoint, OpenCvSharp + SkiaSharp and Emgu CV + Structure.Sketching outperform ImageSharp in both image conversion and pixel iteration tasks. However, ImageSharp showed better memory efficiency during pixel iteration, making it a viable option for applications with limited memory resources. SkiaSharp, with its lightweight architecture and cross-platform compatibility, demonstrated remarkable performance in image conversion tasks. It consistently outperformed ImageSharp while consuming significantly less memory. This makes SkiaSharp an ideal choice for applications requiring efficient format conversion without extensive manipulation of individual pixels. Emgu CV, despite its high memory usage, proved to be the fastest option for pixel iteration. This is unsurprising, given its reliance on OpenCV’s highly optimized C++ backend. However, its higher memory footprint may be a drawback for applications running on constrained systems. Magick.NET, on the other hand, didn't perform well in both image conversion and pixel iteration tasks. This suggests that while Magick.NET is a robust tool for high-quality image manipulation and format conversion, it may not be suitable for performance-critical applications requiring low-latency processing. in graph \ref{fig:image-conversion} and \ref{fig:pixel-iteration} the performance comparison of the libraries in image conversion and pixel iteration tasks respectively can be seen.
+From a performance standpoint, OpenCvSharp + SkiaSharp and Emgu CV + Structure.Sketching outperform ImageSharp in both image conversion and pixel iteration tasks. However, ImageSharp showed better memory efficiency during pixel iteration, making it a viable option for applications with limited memory resources. SkiaSharp, with its lightweight architecture and cross-platform compatibility, demonstrated remarkable performance in image conversion tasks. It consistently outperformed ImageSharp while consuming significantly less memory. This makes SkiaSharp an ideal choice for applications requiring efficient format conversion without extensive manipulation of individual pixels. Emgu CV, despite its high memory usage, proved to be the fastest option for pixel iteration. This is unsurprising, given its reliance on OpenCV’s highly optimized C++ backend. However, its higher memory footprint may be a drawback for applications running on constrained systems. Magick.NET, on the other hand, performed poorly in both image conversion and pixel iteration tasks. This suggests that while Magick.NET is a robust tool for high-quality image manipulation and format conversion, it may not be suitable for performance-critical applications requiring low-latency processing. Figures \ref{fig:image-conversion} and \ref{fig:pixel-iteration} show the performance comparison of the libraries in image conversion and pixel iteration tasks, respectively.

\subsection{The Impact of Licensing on Library Selection}

@@ -40,7 +40,7 @@ Licensing can be a key consideration in selecting an image processing library. T

\section{Strengths and Weaknesses of the Different Libraries}

-ImageSharp’s biggest advantage is its simple API and pure .NET implementation. It is easy to integrate and requires minimal setup.
However, our benchmarks show that it lags behind other libraries in performance. Its relatively high memory efficiency during pixel iteration is a plus, but for tasks requiring fast image conversion or pixel-level modifications, other options are preferable. the combination of OpenCvSharp and SkiaSharp offers a mix of high performance and moderate complexity.This combination provides the best balance between speed and memory efficiency. OpenCvSharp offers the power of OpenCV’s optimized image processing, while SkiaSharp enhances its rendering and format conversion capabilities. However, using these libraries effectively requires familiarity with both OpenCV and SkiaSharp APIs, making them less beginner-friendly than ImageSharp. Emgu CV’s performance in pixel iteration tasks is unmatched, making it ideal for applications involving real-time image analysis, such as AI-driven image recognition. However, its high memory consumption may pose a problem for resource-limited environments. Structure.Sketching complements Emgu CV by providing efficient image creation and drawing capabilities, making this combination well-suited for applications requiring both processing speed and graphical rendering. In contrast, Magick.NET excels in high-quality image manipulation and resampling but falls short in raw speed. The high processing times recorded for pixel iteration indicate that Magick.NET is best suited for batch processing or scenarios where quality takes precedence over execution time. And MagickScaler, provides advanced image scaling capabilities, making it a valuable tool for applications requiring precise image resizing and enhancement.
+ImageSharp’s biggest advantage is its simple API and pure .NET implementation. It is easy to integrate and requires minimal setup. However, benchmarks show that it lags behind other libraries in performance. Its relatively high memory efficiency during pixel iteration is a plus, but for tasks requiring fast image conversion or pixel-level modifications, other options are preferable. The combination of OpenCvSharp and SkiaSharp offers a mix of high performance and moderate complexity. This combination provides the best balance between speed and memory efficiency. OpenCvSharp offers the power of OpenCV’s optimized image processing, while SkiaSharp enhances its rendering and format conversion capabilities. However, using these libraries effectively requires familiarity with both OpenCV and SkiaSharp APIs, making them less beginner-friendly than ImageSharp. Emgu CV’s performance in pixel iteration tasks is unmatched, making it ideal for applications involving real-time image analysis, such as AI-driven image recognition. However, its high memory consumption may pose a problem for resource-limited environments. Structure.Sketching complements Emgu CV by providing efficient image creation and drawing capabilities, making this combination well-suited for applications requiring both processing speed and graphical rendering. In contrast, Magick.NET excels in high-quality image manipulation and resampling but falls short in raw speed. The high processing times recorded for pixel iteration indicate that Magick.NET is best suited for batch processing or scenarios where quality takes precedence over execution time. MagicScaler, in turn, provides advanced image scaling capabilities, making it a valuable tool for applications requiring precise image resizing and enhancement.

Overall, there is no single library that is best for all use cases.
The optimal choice depends on the application’s specific requirements. If ease of implementation and maintainability are priorities, ImageSharp remains a solid choice despite its performance drawbacks. For performance-intensive applications where raw speed is essential, OpenCvSharp+SkiaSharp or Emgu CV+Structure.Sketching are superior choices. @@ -57,13 +57,13 @@ Moreover, the balance between speed and memory efficiency is a recurring challen Future research could explore the following areas to further enhance the capabilities of image processing libraries: -\textbf{Expanding the Scope of Benchmarking:} While our study focused on image conversion and pixel iteration, real-world applications often require additional operations such as filtering, blending, and object detection. Future research could expand the benchmarking scope to include these tasks, providing a more comprehensive evaluation of each library’s capabilities. +\textbf{Expanding the Scope of Benchmarking:} While the study focused on image conversion and pixel iteration, real-world applications often require additional operations such as filtering, blending, and object detection. Future research could expand the benchmarking scope to include these tasks, providing a more comprehensive evaluation of each library’s capabilities. \textbf{Cross-Language Compatibility:} Many image processing libraries are available in multiple programming languages, such as Python, Java, and C++. Investigating the performance of these libraries across different languages could provide valuable insights into the impact of language-specific optimizations on computational efficiency. \textbf{Format-Specific Performance:} Different image formats have unique compression algorithms and color spaces, which can impact the performance of image processing libraries. Future research could investigate how each library performs with specific formats, such as TIFF, BMP, or PNG, to identify any format-specific optimizations or bottlenecks. -\textbf{GPU Acceleration and Parallel Processing:} One limitation of our study is that all benchmarks were conducted on a CPU. Many modern image processing tasks benefit from GPU acceleration, which libraries like OpenCV support. Investigating the performance of these libraries on GPU-accelerated hardware could yield valuable insights into their scalability and efficiency. +\textbf{GPU Acceleration and Parallel Processing:} One limitation of this study is that all benchmarks were conducted on a CPU. Many modern image processing tasks benefit from GPU acceleration, which libraries like OpenCV support. Investigating the performance of these libraries on GPU-accelerated hardware could yield valuable insights into their scalability and efficiency. \textbf{Cloud-Based Processing:} With the growing adoption of cloud computing, it would be beneficial to evaluate how these libraries perform in cloud-based environments such as AWS Lambda or Azure Functions. Factors such as cold start times, scalability, and integration with cloud-based storage solutions would be critical considerations for enterprise applications. diff --git a/sections/Chapter-1-sections/Research-Questions.tex b/sections/Chapter-1-sections/Research-Questions.tex index d30ef7a..2ecad9e 100644 --- a/sections/Chapter-1-sections/Research-Questions.tex +++ b/sections/Chapter-1-sections/Research-Questions.tex @@ -1,6 +1,6 @@ \section{ Research Questions and Investigative Focus} -In This section we examine the core questions that guided the research in this master thesis. 
Rather than adopting a traditional hypothesis-driven approach, the study focused on a systematic, empirical evaluation of image processing libraries. The investigation was centered on two main questions: +In this section, the core questions that guided the research in this master thesis are examined. Rather than adopting a traditional hypothesis-driven approach, the study focused on a systematic, empirical evaluation of image processing libraries. The investigation was centered on two main questions: \begin{enumerate} \item What is the performance of different libraries when executing a defined set of image processing tasks? diff --git a/sections/Chapter-3-sections/Memory-Profiling.tex b/sections/Chapter-3-sections/Memory-Profiling.tex index d5f2412..305af4e 100644 --- a/sections/Chapter-3-sections/Memory-Profiling.tex +++ b/sections/Chapter-3-sections/Memory-Profiling.tex @@ -1,6 +1,6 @@ \section{Memory Profiling and Performance Analysis} -In any high-performance image processing application, it is not enough to measure raw execution time; memory consumption is equally critical. This section describes the integration of memory profiling into the benchmarking framework to provide a comprehensive view of the performance characteristics of each library and complement the time-based measurements. Using BenchmarkDotNet—a powerful tool for .NET performance analysis—we captured detailed metrics on memory allocation and garbage collection behavior. This implementation allowed us to understand the trade-offs between processing speed and resource utilization. +In any high-performance image processing application, it is not enough to measure raw execution time; memory consumption is equally critical. This section describes the integration of memory profiling into the benchmarking framework to provide a comprehensive view of the performance characteristics of each library and complement the time-based measurements. Using BenchmarkDotNet—a powerful tool for .NET performance analysis—detailed metrics on memory allocation and garbage collection behavior were captured. This implementation allowed the trade-offs between processing speed and resource utilization to be better understood. The memory profiling is designed to evaluate not only the mean execution times but also the memory allocated during both image conversion and pixel iteration tasks. Using BenchmarkDotNet’s \texttt{[MemoryDiagnoser]}, \texttt{[Orderer]}, and \texttt{[RankColumn]} attributes, data on memory consumption, garbage collection events, and total allocated memory were collected for each benchmarked operation. The BenchmarkDotNet analyzer for each method by default is configured to automatically determine how many warmup and measurement iterations to run based on the workload, environment, and statistical requirements for accurate measurements. So there is no need to implement a fixed iteration count for each method manually. @@ -61,8 +61,8 @@ public void PixelIterationBenchmark() } \end{lstlisting} -The pixel iteration benchmark was implemented in a similar manner, with the same memory diagnostics attributes. The code snippet above demonstrates the pixel iteration benchmark for ImageSharp, where each pixel in the image is converted to grayscale. The memory diagnostics provided by BenchmarkDotNet allowed us to track the memory consumption and garbage collection events during the pixel iteration operation, providing valuable insights into the resource utilization of each library. 
+The pixel iteration benchmark was implemented in a similar manner with the same memory diagnostics attributes. The code snippet above demonstrates the pixel iteration benchmark for ImageSharp, where each pixel in the image is converted to grayscale. The memory diagnostics provided by BenchmarkDotNet enabled tracking of the memory consumption and garbage collection events during the pixel iteration operation, providing valuable insights into the resource utilization of each library. -This code exemplifies our approach to memory diagnostics. By annotating the benchmark class with \texttt{[MemoryDiagnoser]}, BenchmarkDotNet automatically collects data on memory usage—including the number of garbage collection (GC) events and the total allocated memory during each benchmarked operation. Similar implimentations were done for other libraries as well. +This code exemplifies the approach to memory diagnostics. By annotating the benchmark class with \texttt{[MemoryDiagnoser]}, BenchmarkDotNet automatically collects data on memory usage—including the number of garbage collection (GC) events and the total allocated memory during each benchmarked operation. Similar implementations were done for other libraries as well. This level of granularity provided insights that went beyond raw timing metrics, revealing, for example, that while Emgu CV might be faster in certain operations, its higher memory consumption could be a concern for applications running on memory-constrained systems. \ No newline at end of file diff --git a/sections/Chapter-3-sections/Pixel-Iteration.tex b/sections/Chapter-3-sections/Pixel-Iteration.tex index 47a6155..db07947 100644 --- a/sections/Chapter-3-sections/Pixel-Iteration.tex +++ b/sections/Chapter-3-sections/Pixel-Iteration.tex @@ -1,6 +1,6 @@ \subsection{Pixel Iteration Benchmark Implementation} -The pixel iteration benchmark measures the time taken to perform a basic image processing operation—converting an image to grayscale by iterating over each pixel. While modern image processing often employs vectorized operations on entire matrices for efficiency, pixel-by-pixel iteration remains relevant in several scenarios: when implementing custom filters with complex logic, when working with specialized pixel formats, or when memory constraints limit bulk operations. Additionally, this benchmark provides insight into the underlying performance characteristics of image libraries even if vectorized alternatives would be preferred in production environments. By examining the performance of this fundamental operation, we can better understand the efficiency trade-offs in various image processing contexts. +The pixel iteration benchmark measures the time taken to perform a basic image processing operation—converting an image to grayscale by iterating over each pixel. While modern image processing often employs vectorized operations on entire matrices for efficiency, pixel-by-pixel iteration remains relevant in several scenarios: when implementing custom filters with complex logic, when working with specialized pixel formats, or when memory constraints limit bulk operations. Additionally, this benchmark provides insight into the underlying performance characteristics of image libraries even if vectorized alternatives would be preferred in production environments. By examining the performance of this fundamental operation, a better understanding of the efficiency trade-offs in various image processing contexts is achieved. 
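The per-pixel operation at the heart of this benchmark is the same for every library: the red, green, and blue components of each pixel are combined into a single luminance value. As a point of reference, the sketch below shows a minimal, library-agnostic form of that mapping; the weights are an assumption (the common ITU-R BT.601 coefficients), and the helper is illustrative rather than taken from the benchmark code.

\begin{lstlisting}[language={[Sharp]C}, caption={Illustrative per-pixel grayscale mapping (assumed BT.601 weights)}]
// Illustrative sketch only: a generic grayscale mapping applied to every pixel.
// The 0.299/0.587/0.114 weights are assumed; the benchmark code may differ.
public static class GrayscaleHelper
{
    public static byte ToGray(byte r, byte g, byte b)
    {
        // Weighted luminance, truncated to fit an 8-bit channel.
        return (byte)(0.299 * r + 0.587 * g + 0.114 * b);
    }
}
\end{lstlisting}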
For ImageSharp, the implementation involves loading the image as an array of pixels, processing each pixel to compute its grayscale value, and then updating the image accordingly. The following snippet provides a glimpse into this process:

diff --git a/sections/Chapter-3-sections/System-Architecture.tex b/sections/Chapter-3-sections/System-Architecture.tex
index da12af1..87c815e 100644
--- a/sections/Chapter-3-sections/System-Architecture.tex
+++ b/sections/Chapter-3-sections/System-Architecture.tex
@@ -1,8 +1,7 @@
\section{System Architecture and Design Rationale}
+The design of the benchmarking framework was guided by the need for consistency, repeatability, and scientific rigor. The system was architected to support multiple libraries through a common interface, ensuring that each library’s performance could be measured under identical conditions. At the core of the design was a two-phase benchmarking process: an initial warm-up phase to account for any initialization overhead, followed by a main test phase where the actual performance metrics were recorded.
-The design of our benchmarking framework was guided by the need for consistency, repeatability, and scientific severity. The system was architected to support multiple libraries through a common interface, ensuring that each library’s performance could be measured under identical conditions. At the core of our design was a two-phase benchmarking process: an initial warm-up phase to account for any initialization overhead, followed by a main test phase where the actual performance metrics were recorded.
-
-In constructing the system, several important decisions were made. First, we employed a modular approach, separating the benchmarking routines into distinct components. This allowed us to encapsulate the logic for image conversion and pixel iteration into separate classes, each responsible for executing a series of timed iterations and logging the results.
+In constructing the system, several important decisions were made. First, a modular approach was employed, separating the benchmarking routines into distinct components. This allowed the logic for image conversion and pixel iteration to be encapsulated into separate classes, each responsible for executing a series of timed iterations and logging the results.

\begin{lstlisting}[language={[Sharp]C}, caption={Design of the benchmarking framework}]
public class ImageConversionBenchmark{
@@ -26,7 +25,7 @@ The architecture also included a dedicated component for result aggregation, whi
}
\end{lstlisting}
-An essential aspect of the design was the uniformity of testing. Despite the differences in methods of implementation among the libraries, the benchmarking framework was designed to abstract away these differences. Each library was integrated by implementing the same sequence of operations: reading an image from disk, processing the image (either converting its format or iterating over its pixels to apply a grayscale filter), and finally saving the processed image back to disk. This uniform methodology ensured that our performance comparisons were both fair and reproducible.
+An essential aspect of the design was the uniformity of testing. Despite the differences in methods of implementation among the libraries, the benchmarking framework was designed to abstract away these differences.
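To make this abstraction concrete, the sketch below shows one possible shape for such a harness: a shared adapter interface plus a two-phase timing loop (warm-up iterations followed by measured iterations). The interface, the method names, and the warm-up count are illustrative assumptions rather than the thesis implementation; the concrete sequence of operations that each adapter performs is described right after the listing.

\begin{lstlisting}[language={[Sharp]C}, caption={Illustrative sketch of a common benchmarking harness (hypothetical names)}]
// Hypothetical adapter interface: each library wrapper exposes the same
// operations so the harness can time them under identical conditions.
public interface IImageLibraryAdapter
{
    void ConvertJpegToPng(string inputPath, string outputPath);
    void ConvertToGrayscale(string inputPath, string outputPath);
}

public static class TimingHarness
{
    // Two-phase measurement: warm-up runs are executed but not recorded,
    // then the mean over the main iterations is returned in milliseconds.
    public static double Measure(System.Action operation, int warmupIterations = 5, int mainIterations = 100)
    {
        for (int i = 0; i < warmupIterations; i++)
        {
            operation();
        }
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        for (int i = 0; i < mainIterations; i++)
        {
            operation();
        }
        stopwatch.Stop();
        return stopwatch.Elapsed.TotalMilliseconds / mainIterations;
    }
}
\end{lstlisting}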
Each library was integrated by implementing the same sequence of operations: reading an image from disk, processing the image (either converting its format or iterating over its pixels to apply a grayscale filter), and finally saving the processed image back to disk. This uniform methodology ensured that the performance comparisons were both fair and reproducible.

The architecture also accounted for system-level factors such as memory management and garbage collection. For instance, in languages like C\#, where unmanaged resources must be explicitly disposed of, the design included rigorous cleanup routines to ensure that each iteration began with a clean slate. This attention to detail was crucial in obtaining accurate measurements, as any residual state from previous iterations could skew the results.

diff --git a/sections/Chapter-4-sections/Analysis_and_Interpretation_of_Results.tex b/sections/Chapter-4-sections/Analysis_and_Interpretation_of_Results.tex
index b97725b..2f02362 100644
--- a/sections/Chapter-4-sections/Analysis_and_Interpretation_of_Results.tex
+++ b/sections/Chapter-4-sections/Analysis_and_Interpretation_of_Results.tex
@@ -1,6 +1,6 @@
\section{Analysis and Interpretation of Results}
-As the final benchmarking results were collected and plotted, the emerging trends provided critical insights into the efficiency of various image processing libraries. The raw numerical data from our benchmarking suite provided an answer to the research question, but a deeper interpretation of these results allowed us to refine our understanding of the trade-offs and strengths of each alternative. This section explores the relationship between speed and memory usage, compares the empirical findings with theoretical expectations, and discusses the implications for real-world applications.
+As the final benchmarking results were collected and plotted, the emerging trends provided critical insights into the efficiency of various image processing libraries. The raw numerical data from the benchmarking suite provided an answer to the research question, but a deeper interpretation of these results refined the understanding of the trade-offs and strengths of each alternative. This section explores the relationship between speed and memory usage, compares the empirical findings with theoretical expectations, and discusses the implications for real-world applications.

\subsection{Comparison of Performance Trends}

@@ -14,7 +14,7 @@ The trends in memory consumption were particularly revealing. In the image conve

\subsection{Trade-Offs Between Speed and Memory Usage}

-The relationship between speed and memory consumption is a recurring theme in performance optimization. Our results underscore that achieving optimal speed often comes at the cost of increased memory usage. Emgu CV+Structure.Sketching exemplifies this trade-off: while its pixel iteration speed was among the best recorded, it consumed significantly more RAM than ImageSharp.
+The relationship between speed and memory consumption is a recurring theme in performance optimization. The results underscore that achieving optimal speed often comes at the cost of increased memory usage. Emgu CV+Structure.Sketching exemplifies this trade-off: while its pixel iteration speed was among the best recorded, it consumed significantly more RAM than ImageSharp.

The implications of these trade-offs depend heavily on the intended application.
For environments where processing speed is paramount—such as real-time video processing or AI-powered image enhancement—Emgu CV’s increased memory footprint may be an acceptable compromise. However, in resource-constrained applications (e.g., embedded systems, mobile devices, or cloud-based deployments with strict memory limits), a lower-memory alternative like ImageSharp may be more suitable despite its lower speed.

diff --git a/sections/Chapter-4-sections/Image_conversion_benchmark_results.tex b/sections/Chapter-4-sections/Image_conversion_benchmark_results.tex
index d30092c..e568446 100644
--- a/sections/Chapter-4-sections/Image_conversion_benchmark_results.tex
+++ b/sections/Chapter-4-sections/Image_conversion_benchmark_results.tex
@@ -1,6 +1,6 @@
\section{Image Conversion Benchmark Results}
-The image conversion benchmark was performed using ImageSharp and Magick.NET as well as SkiaSharp and Structure.Sketching which were the chosen libraries in their combinations with OpenCvSharp and Emgu CV, respectively for the conversion task. Using the same 4k resolution image, the benchmark measured the time taken to convert the image from JPEG to PNG format. Comparing the results of these libraries provides insights into their performance and efficiency in application scenarios where rapid image conversion is required—such as real-time image processing pipelines or high-volume batch processing environments. The data thus answer one of our central question to which library can provide significantly faster image conversion, thereby supporting the hypothesis discussed in earlier chapters.
+The image conversion benchmark was performed using ImageSharp and Magick.NET, as well as SkiaSharp and Structure.Sketching, which were the libraries chosen in their combinations with OpenCvSharp and Emgu CV, respectively, for the conversion task. Using the same 4K resolution image, the benchmark measured the time taken to convert the image from JPEG to PNG format. Comparing the results of these libraries provides insights into their performance and efficiency in application scenarios where rapid image conversion is required—such as real-time image processing pipelines or high-volume batch processing environments. The data thus answer one of the central questions regarding which library can provide significantly faster image conversion, thereby supporting the hypothesis discussed in earlier chapters.

ImageSharp recorded an average conversion time of approximately 2,754 milliseconds. In contrast, the combination of OpenCvSharp with SkiaSharp delivered an average conversion time of only 539 milliseconds. Similarly, Emgu CV integrated with Structure.Sketching achieved an average time of 490 milliseconds, while Magick.NET registered an average conversion time of 4,333 milliseconds.

@@ -37,7 +37,7 @@ To visually summarize these findings, Figure \ref{fig:image-conversion} presents
\begin{center}
\includegraphics[width=5in]{media/log_1.png}
-\captionof{figure}{Bar chart showing the Image Conversion Benchmark Results in milliseconds, with a logarithmic scale to highlight the differences in total times. x-axis represents the libraries or combinations, while y-axis shows the time in milliseconds.}
+\captionof{figure}{Bar chart showing the Image Conversion Benchmark Results in milliseconds, with a logarithmic scale to highlight the differences in total times.
The x-axis represents the libraries or combinations, while the y-axis shows the time in milliseconds.}
\label{fig:image-conversion}
\vspace{0.5cm}
\end{center}

diff --git a/sections/Chapter-4-sections/Memory_benchmark_results.tex b/sections/Chapter-4-sections/Memory_benchmark_results.tex
index 1b62ea6..3cebf04 100644
--- a/sections/Chapter-4-sections/Memory_benchmark_results.tex
+++ b/sections/Chapter-4-sections/Memory_benchmark_results.tex
@@ -1,6 +1,6 @@
\section{Memory Benchmarking Results}
-In parallel with the time benchmarks, memory consumption was a critical parameter in our evaluation. For the image conversion tasks, SkiaSharp, as part of the OpenCvSharp+SkiaSharp configuration, exhibited the lowest memory allocation, with values approximating 58 KB. ImageSharp, in comparison, required about 5.67 MB, which is substantially higher. In the context of pixel iteration, the memory profiles were similarly divergent. ImageSharp was extremely efficient in this regard, consuming roughly 20 KB on average, whereas Emgu CV + Structure.Sketching, that performed exceptionally well in terms of speed for pixel iteration, in memory terms, was less efficient. It consumed around 170 MB of memory, which is significantly higher than the other libraries tested. SkiaSharp,
+In parallel with the time benchmarks, memory consumption was a critical parameter in the evaluation. For the image conversion tasks, SkiaSharp, as part of the OpenCvSharp+SkiaSharp configuration, exhibited the lowest memory allocation, with values approximating 58 KB. ImageSharp, in comparison, required about 5.67 MB, which is substantially higher. In the context of pixel iteration, the memory profiles were similarly divergent. ImageSharp was extremely efficient in this regard, consuming roughly 20 KB on average, whereas Emgu CV + Structure.Sketching, which performed exceptionally well in terms of speed for pixel iteration, was less efficient in memory terms. It consumed around 170 MB of memory, which is significantly higher than the other libraries tested. SkiaSharp,
\setlength{\columnWidth}{0.22\textwidth}
\begin{longtable}{|>{\raggedright\arraybackslash}p{0.20\textwidth}|>{\raggedright\arraybackslash}p{\columnWidth}|>{\raggedright\arraybackslash}p{\columnWidth}|>{\raggedright\arraybackslash}p{\columnWidth}|>{\raggedright\arraybackslash}p{\columnWidth}|}
@@ -58,4 +58,4 @@ The large memory footprint of Emgu CV during pixel iteration is a noteworthy tra
}
\end{longtable}
-The table \ref{tab:memory-results-pixel-iteration} indicates that while SkiaSharp has the highest memory allocation for pixel iteration of approximately 384 MB, ImageSharp is the most memory-efficient, with a memory allocation of 0.01932 MB. Emgu CV falls in between, with a memory allocation of 170 MB. These figures provide a clear indication of the memory efficiency of each library for pixel iteration tasks. Garbage collection counts are also included to provide additional context on the memory management behavior of each library. Gen0, Gen1, and Gen2 collections means the number of times each generation was collected during the benchmarking process. this means that the garbage collector had to run 33,142 times for Gen0, 1,571 times for Gen1, and 1,571 times for Gen2.
\ No newline at end of file
+The table \ref{tab:memory-results-pixel-iteration} indicates that while SkiaSharp has the highest memory allocation for pixel iteration of approximately 384 MB, ImageSharp is the most memory-efficient, with a memory allocation of 0.01932 MB.
Emgu CV falls in between, with a memory allocation of 170 MB. These figures provide a clear indication of the memory efficiency of each library for pixel iteration tasks. Garbage collection counts are also included to provide additional context on the memory management behavior of each library. The Gen0, Gen1, and Gen2 collection counts indicate the number of times each generation was collected during the benchmarking process. This means that the garbage collector had to run 33,142 times for Gen0, 1,571 times for Gen1, and 1,571 times for Gen2.
\ No newline at end of file

diff --git a/sections/Chapter-4-sections/Pixel_iteration_benchmark_results.tex b/sections/Chapter-4-sections/Pixel_iteration_benchmark_results.tex
index 023b869..3035150 100644
--- a/sections/Chapter-4-sections/Pixel_iteration_benchmark_results.tex
+++ b/sections/Chapter-4-sections/Pixel_iteration_benchmark_results.tex
@@ -26,12 +26,12 @@ On the other hand, the pixel iteration benchmark aimed to assess the libraries
\end{longtable}
\renewcommand{\arraystretch}{1.0}
-The performance landscape changed when we observed the results for Magick.NET. This configuration recorded a warm-up time of approximately 12,149 milliseconds, and the main iterations averaged 2,054.18 milliseconds, resulting in an astronomical total of 217,567 milliseconds. As discussed earlier, OpenCvSharp and Emgu CV were the chosen libraries in their combinations with SkiaSharp and Structure.Sketching, respectively for the pixel iteration task. The results of these tests provide insights into the performance of these libraries in scenarios where pixel-level operations are required, such as image processing algorithms or computer vision applications. The performance landscape changed when we observed the results for OpenCvSharp. This configuration recorded a warm-up time of approximately 813 milliseconds, and the main iterations averaged 159.44 milliseconds, resulting in a total of 16,757 milliseconds. In contrast, Emgu CV delivered impressive results with a warm-up time of 1,118 milliseconds and an average main iteration time of 118.87 milliseconds, culminating in a total of 13,005 milliseconds.
+The performance landscape changed upon examining the results for Magick.NET. This configuration recorded a warm-up time of approximately 12,149 milliseconds, and the main iterations averaged 2,054.18 milliseconds, resulting in an astronomical total of 217,567 milliseconds (the 12,149 ms warm-up plus 100 main iterations averaging 2,054.18 ms). As discussed earlier, OpenCvSharp and Emgu CV were chosen in combinations with SkiaSharp and Structure.Sketching, respectively, for the pixel iteration task. The results of these tests provide insights into the performance of these libraries in scenarios where pixel-level operations are required, such as image processing algorithms or computer vision applications. The performance landscape also shifted upon examining the results for OpenCvSharp. This configuration recorded a warm-up time of approximately 813 milliseconds, and the main iterations averaged 159.44 milliseconds, resulting in a total of 16,757 milliseconds. In contrast, Emgu CV delivered impressive results with a warm-up time of 1,118 milliseconds and an average main iteration time of 118.87 milliseconds, culminating in a total of 13,005 milliseconds.

The table \ref{tab:pixel-iteration} summarizes the pixel iteration benchmark results, highlighting the warm-up and average times for each library combination. The data clearly show that Emgu CV is the most efficient library for pixel iteration, with the lowest average time of 118.87 milliseconds.
ImageSharp and OpenCvSharp follow closely behind, with average times of 117.06 and 159.44 milliseconds, respectively. In contrast, Magick.NET is significantly slower, with an average time of 2,054.18 milliseconds. The graphical depiction in Figure \ref{fig:pixel-iteration} further highlights these performance differences.

\includegraphics[width=5in]{media/log_2.png}
-\captionof{figure}{Bar chart showing the Pixel Iteration Benchmark Results in milliseconds, with a logarithmic scale to highlight the differences in total times. x-axis represents the libraries or combinations, while y-axis shows the time in milliseconds.}
+\captionof{figure}{Bar chart showing the Pixel Iteration Benchmark Results in milliseconds, with a logarithmic scale to highlight the differences in total times. The x-axis represents the libraries or combinations, while the y-axis shows the time in milliseconds.}
\label{fig:pixel-iteration}
\vspace{1em}

--
GitLab