    Modern GUI Applications for Computer Vision in Python



I'm a huge fan of interactive visualizations. As a computer vision engineer, I deal with image processing related tasks almost daily, and more often than not I'm iterating on a problem where I need visual feedback to make decisions. Think of a very simple image processing pipeline with a single step that has some parameters to transform an image:

    Sample processing pipeline with missing visualization of output

How do you know which parameters to adjust? Does the pipeline even work as expected? Without visualizing your output, you might miss key insights and make suboptimal choices.

Sometimes simply displaying the output image and/or some calculated metrics can be enough to iterate on the parameters. But I have found myself in many situations where a tool would be immensely helpful to iterate quickly and interactively on my pipeline. So in this article I will show you how to work with the simple built-in interactive elements of OpenCV, as well as how to build more modern user interfaces for Computer Vision projects using customtkinter.

Prerequisites

If you want to follow along, I recommend setting up your local environment with uv and installing the following packages:

uv add numpy opencv-python pillow customtkinter
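
If you are not managing the project with uv, installing the same packages with plain pip into an activated virtual environment should work as well (this is my assumption, the article itself uses uv):

pip install numpy opencv-python pillow customtkinter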

Goal

Before we dive into the code of the project, let's quickly outline what we want to build. The application should use the webcam feed and allow the user to select different types of filters that will be applied to the stream. The processed image should be shown in real-time in the window. A rough sketch of a possible UI would look as follows:

    OpenCV – GUI

Let's start with a simple loop that fetches frames from your webcam and displays them in an OpenCV window.

import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow("Video Feed", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Keyboard Input

The simplest way to add interactivity here is via keyboard input. For example, we can cycle through different filters with the number keys.

...

filter_type = "normal"

while True:
    ...

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "normal":
        pass

    ...

    if key == ord('1'):
        filter_type = "normal"
    if key == ord('2'):
        filter_type = "grayscale"

    ...

Now you can switch between the normal image and the grayscale version by pressing the number keys 1 and 2. Let's also quickly add a caption to the image so we can actually see the name of the filter we are applying.

We need to be careful here: if you look at the shape of the frame after the filter, you will notice that the dimensionality of the frame array has changed. Remember that OpenCV image arrays are ordered HWC (height, width, channels) with the channels in BGR order (blue, green, red), so the 640×480 image from my webcam has shape (480, 640, 3).

print(filter_type, frame.shape)
# normal (480, 640, 3)
# grayscale (480, 640)

Because the grayscale operation outputs a single-channel image, the color dimension is dropped. If we now want to draw on top of this image, we either need to specify a single-channel color for the grayscale image, or we convert the image back to the original BGR format. The second option is a bit cleaner because it lets us unify the annotation of the image.

if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "normal":
    pass

if len(frame.shape) == 2:  # Convert grayscale to BGR
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

    Caption

I want to add a black border at the bottom of the image, on top of which the name of the filter will be shown. We can use the copyMakeBorder function to pad the image with a border color at the bottom. Then we can draw the text on top of this border.

# Add a black border at the bottom of the frame
border_height = 50
border_color = (0, 0, 0)
frame = cv2.copyMakeBorder(frame, 0, border_height, 0, 0, cv2.BORDER_CONSTANT, value=border_color)

# Show the filter name
cv2.putText(
    frame,
    filter_type,
    (frame.shape[1] // 2 - 50, frame.shape[0] - border_height // 2 + 10),
    cv2.FONT_HERSHEY_SIMPLEX,
    1,
    (255, 255, 255),
    2,
    cv2.LINE_AA,
)

This is how the output should look: you can switch between the normal and grayscale modes, and the frames are captioned accordingly.
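
If you want a single runnable checkpoint at this stage, here is a minimal sketch that combines the snippets so far (capture loop, keyboard switching, grayscale filter, caption) into one script, assuming a webcam at index 0:

import cv2

cap = cv2.VideoCapture(0)
filter_type = "normal"

while True:
    ret, frame = cap.read()
    if not ret:
        break

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if len(frame.shape) == 2:  # Convert grayscale back to BGR for annotation
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

    # Black border at the bottom with the filter name as caption
    border_height = 50
    frame = cv2.copyMakeBorder(frame, 0, border_height, 0, 0, cv2.BORDER_CONSTANT, value=(0, 0, 0))
    cv2.putText(frame, filter_type, (frame.shape[1] // 2 - 50, frame.shape[0] - border_height // 2 + 10),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)

    cv2.imshow("Video Feed", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break
    if key == ord('1'):
        filter_type = "normal"
    if key == ord('2'):
        filter_type = "grayscale"

cap.release()
cv2.destroyAllWindows()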

    Sliders

Instead of using the keyboard as input method, OpenCV also provides a basic trackbar (slider) UI element. The trackbar needs to be initialized at the beginning of the script. We need to reference the same window that we will later display our images in, so I will create a variable for the name of the window. Using this name, we can create the trackbar and let it act as a selector for the index into the list of filters.

    filter_types = ["normal", "grayscale"]
    
    win_name = "Webcam Stream"
    cv2.namedWindow(win_name)
    
    tb_filter = "Filter"
    # def createTrackbar(trackbarName: str, windowName: str, worth: int, depend: int, onChange: _typing.Callable[[int], None]) -> None: ...
    cv2.createTrackbar(
        tb_filter,
        win_name,
        0,
        len(filter_types) - 1,
        lambda _: None,
    )

Notice how we use an empty lambda for the onChange callback; we will fetch the value manually in the loop. Everything else stays the same.

while True:
    ...

    # Get the selected filter type
    filter_id = cv2.getTrackbarPos(tb_filter, win_name)
    filter_type = filter_types[filter_id]

    ...

And voilà, we now have a trackbar to select our filter.

We can also easily add more filters simply by extending our list and implementing each processing step.

import numpy as np

filter_types = [
    "normal",
    "grayscale",
    "blur",
    "threshold",
    "canny",
    "sobel",
    "laplacian",
]

...

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "blur":
        frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
    elif filter_type == "threshold":
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
    elif filter_type == "canny":
        frame = cv2.Canny(frame, threshold1=100, threshold2=200)
    elif filter_type == "sobel":
        frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
    elif filter_type == "laplacian":
        frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
    elif filter_type == "normal":
        pass

    if frame.dtype != np.uint8:
        # Scale the frame to uint8 if necessary
        cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
        frame = frame.astype(np.uint8)

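As a side note, the same trackbar mechanism can drive not just the filter selection but also individual filter parameters, which is exactly the kind of interactive iteration mentioned at the start. Here is a small sketch of the idea; the extra trackbar name "Canny Max" and its value range are my own additions, not part of the original code:

# Hypothetical second trackbar controlling the upper Canny threshold (0-500)
tb_canny_max = "Canny Max"
cv2.createTrackbar(tb_canny_max, win_name, 200, 500, lambda _: None)

...

    # Inside the loop, read the parameter before applying the filter
    canny_max = cv2.getTrackbarPos(tb_canny_max, win_name)
    if filter_type == "canny":
        frame = cv2.Canny(frame, threshold1=canny_max // 2, threshold2=canny_max)
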
Modern GUI with CustomTkinter

Now I don't know about you, but the current user interface doesn't look very modern to me. Don't get me wrong, there is some beauty in the style of the interface, but I prefer cleaner, more modern designs. Plus, we are already at the limit of what OpenCV offers out of the box in terms of UI elements: no buttons, text fields, dropdowns, checkboxes or radio buttons, and no custom layouts. So let's see how we can transform the look and user experience of this basic application into a modern and clean one.

To get started, we first need to create a class for our app. We create two frames: the first one contains our filter selection on the left side and the second wraps the image display. For now, let's start with a simple placeholder text. Unfortunately there is no out-of-the-box OpenCV component from customtkinter directly, so we will quickly build our own in the next few steps. But let's first finish the basic UI layout.

import customtkinter


class App(customtkinter.CTk):
    def __init__(self) -> None:
        super().__init__()

        self.title("Webcam Stream")
        self.geometry("800x600")

        self.filter_var = customtkinter.IntVar(value=0)

        # Frame for filters
        self.filters_frame = customtkinter.CTkFrame(self)
        self.filters_frame.pack(side="left", fill="both", expand=False, padx=10, pady=10)

        # Frame for image display
        self.image_frame = customtkinter.CTkFrame(self)
        self.image_frame.pack(side="right", fill="both", expand=True, padx=10, pady=10)

        self.image_display = customtkinter.CTkLabel(self.image_frame, text="Loading...")
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)

app = App()
app.mainloop()

    Filter Radio Buttons

Now that the skeleton is built, we can start filling in our components. For the left side, I will be using the same list of filter_types to populate a group of radio buttons to select the filter.

        # Create radio buttons for each filter type
        self.filter_var = customtkinter.IntVar(value=0)
        for filter_id, filter_type in enumerate(filter_types):
            rb_filter = customtkinter.CTkRadioButton(
                self.filters_frame,
                text=filter_type.capitalize(),
                variable=self.filter_var,
                value=filter_id,
            )
            rb_filter.pack(padx=10, pady=10)

            if filter_id == 0:
                rb_filter.select()

Image Display Component

Now we can get started on the interesting part: how to get our OpenCV frames to show up in the image component. Because there is no built-in component for this, let's create our own based on the CTkLabel. This allows us to display a loading text while the webcam stream is starting up.

...

from typing import Any


class CTkImageDisplay(customtkinter.CTkLabel):
    """
    A reusable ctk widget to display opencv images.
    """

    def __init__(
        self,
        master: Any,
    ) -> None:
        self._textvariable = customtkinter.StringVar(master, "Loading...")
        super().__init__(
            master,
            textvariable=self._textvariable,
            image=None,
        )

...

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.image_display = CTkImageDisplay(self.image_frame)
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)

So far nothing has changed; we simply swapped out the existing label with our custom class implementation. In our CTkImageDisplay class we can now define a function that shows an image in the component, let's call it set_frame.

import cv2
import numpy.typing as npt
from PIL import Image

class CTkImageDisplay(customtkinter.CTkLabel):
    ...

    def set_frame(self, frame: npt.NDArray) -> None:
        """
        Set the frame to be displayed in the widget.

        Args:
            frame: The new frame to display, in opencv format (BGR).
        """
        target_width, target_height = frame.shape[1], frame.shape[0]

        # Convert the frame to PIL Image format
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_pil = Image.fromarray(frame_rgb, "RGB")

        ctk_image = customtkinter.CTkImage(
            light_image=frame_pil,
            dark_image=frame_pil,
            size=(target_width, target_height),
        )
        self.configure(image=ctk_image, text="")
        self._textvariable.set("")

Let's digest this. First we need to know how big our image component will be; we can extract that information from the shape property of our image array. To display the image in tkinter, we need a Pillow Image object, we cannot use the OpenCV array directly. To convert an OpenCV array to Pillow, we first convert the color space from BGR to RGB and then use the Image.fromarray function to create the Pillow Image object. Next we create a CTkImage, where we use the same image regardless of the theme and set the size according to our frame. Finally we use the configure method to set the image on our widget. At the end, we also reset the text variable to remove the "Loading..." text, even though it would theoretically be hidden behind the image anyway.

To quickly test this, we can set the first image from our webcam in the constructor. (We will see in a moment why this is not such a good idea.)

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        cap = cv2.VideoCapture(0)
        _, frame0 = cap.read()
        self.image_display.set_frame(frame0)

If you run this, you will notice that the window takes a bit longer to pop up, but after a short delay you should see a static image from your webcam.

NOTE: If you don't have a webcam ready, you can also just use a local video file by passing the file path to the cv2.VideoCapture constructor call.
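
For example, something like this should work; the file name here is just a placeholder for any local video:

# Hypothetical local video file instead of the webcam
cap = cv2.VideoCapture("my_video.mp4")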

Now this is not very exciting yet, since the frame doesn't update. So let's see what happens if we try to do that naively.

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        cap = cv2.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            if not ret:
                break

            self.image_display.set_frame(frame)

Almost the same as before, except now we run the frame loop as we did in the previous chapter with the OpenCV GUI. If you run this, you will see... exactly nothing. The window never shows up, since we are creating an infinite loop in the constructor of the app! This is also the reason why the window only showed up after a delay in the previous example: opening the webcam stream is a blocking operation, and the event loop for the window cannot run, so it doesn't show up yet.

So let's fix this with a slightly better implementation that allows the GUI event loop to run while we also update the frame every now and then. We can use the after method of tkinter to schedule a function call while yielding back to the event loop during the wait time.

    
        ...

        self.cap = cv2.VideoCapture(0)
        self.after(10, self.update_frame)

    def update_frame(self) -> None:
        """
        Update the displayed frame.
        """
        ret, frame = self.cap.read()
        if not ret:
            return

        self.image_display.set_frame(frame)

        self.after(10, self.update_frame)
    

We still set up the webcam stream in the constructor, so we haven't solved that problem yet. But at least we can now see a continuous stream of frames in our image component.

Applying Filters

Now that the frame loop is working, we can re-implement our filters from the beginning and apply them to our webcam stream. In the update_frame function, we can check the current filter variable and apply the corresponding filter function.

    def update_frame(self) -> None:
        ...

        # Get the selected filter type
        filter_id = self.filter_var.get()
        filter_type = filter_types[filter_id]

        if filter_type == "grayscale":
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        elif filter_type == "blur":
            frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
        elif filter_type == "threshold":
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
        elif filter_type == "canny":
            frame = cv2.Canny(frame, threshold1=100, threshold2=200)
        elif filter_type == "sobel":
            frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
        elif filter_type == "laplacian":
            frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
        elif filter_type == "normal":
            pass

        if frame.dtype != np.uint8:
            # Scale the frame to uint8 if necessary
            cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
            frame = frame.astype(np.uint8)
        if len(frame.shape) == 2:  # Convert grayscale to BGR
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

        self.image_display.set_frame(frame)

        self.after(10, self.update_frame)

And now we are back to the full functionality of the application: you can select any filter on the left side and it will be applied in real-time to the webcam feed!

    Multithreading and Synchronization

Although the application runs as is, there are some problems with the current way we run our frame loop. Everything currently runs in a single thread, the main GUI thread. This is why, at the beginning, we don't immediately see the window pop up: our webcam initialization blocks the main thread. Now imagine we did some heavier image processing, maybe running the images through a neural network; you wouldn't want your user interface to be blocked the whole time the network is running inference. That would lead to a very unresponsive user experience when clicking the UI elements!

A better way to handle this in our application is to separate the image processing from the user interface. In general it is almost always a good idea to separate your GUI logic from any kind of non-trivial processing. So in our case, we will run a separate thread that is responsible for the image loop. It will read the frames from the webcam stream and apply the filters.

NOTE: Python threads are not "real" threads in the sense that they cannot run on different logical CPU cores in parallel. In Python multithreading the context switches between the threads, but due to the GIL, the global interpreter lock, a single Python process can only execute one thread at a time. If you want "real" parallel processing, you will need to use multiprocessing. Since our workload here is not CPU bound but actually I/O bound, multithreading suffices.

import threading

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

    def run_webcam_loop(self) -> None:
        """
        Run the webcam loop in a separate thread.
        """
        self.cap = cv2.VideoCapture(0)
        if not self.cap.isOpened():
            return

        while True:
            ret, frame = self.cap.read()
            if not ret:
                break

            # Filters
            ...

            self.image_display.set_frame(frame)

If you run this, you will now see that our window opens up immediately and we even see the loading text while the webcam stream is opening. However, as soon as the stream starts, the frames begin to flicker. Depending on a number of factors, you might experience different visual artifacts or errors at this stage.

Warning: flashing image

Why is this happening? The problem is that we are trying to update the new frame while the internal refresh loop of the user interface might be using the very same array to draw it on the screen. Both are competing for the same frame array.

It is generally not a good idea to update UI elements directly from a different thread; in some frameworks this is even prevented and will raise exceptions. In Tkinter we can do it, but we will get weird results. We need some kind of synchronization between our threads. That's where the Queue comes into play.

You are probably familiar with queues from the grocery store or theme parks. The concept of the queue here is very similar: the first element that goes into the queue also leaves first (First In, First Out).

In this case, we actually just want a queue with a single element, a single-slot queue. The queue implementation in Python is thread-safe, meaning we can put and get items from different threads. Perfect for our use case: the processing thread will put the image arrays into the queue and the GUI thread will try to get an element, but not block if the queue is empty.
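
As a quick standalone illustration of these queue semantics (separate from the app code below):

import queue

q = queue.Queue(maxsize=1)  # single-slot, thread-safe queue
q.put("frame A")            # fills the only slot; another put would block until it is emptied

try:
    item = q.get_nowait()   # returns "frame A" immediately
except queue.Empty:
    item = None             # only raised if the queue happened to be empty

print(item)  # frame A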

import queue

class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...

        self.queue = queue.Queue(maxsize=1)

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

        self.frame_loop_dt_ms = 16  # ~60 FPS
        self.after(self.frame_loop_dt_ms, self._update_frame)

    def _update_frame(self) -> None:
        """
        Update the frame in the image display widget.
        """
        try:
            frame = self.queue.get_nowait()
            self.image_display.set_frame(frame)
        except queue.Empty:
            pass

        self.after(self.frame_loop_dt_ms, self._update_frame)

    def run_webcam_loop(self) -> None:
        ...

        while True:
            ...

            self.queue.put(frame)

Notice how we move the direct call to the set_frame function out of the webcam loop, which runs in its own thread, and into the _update_frame function that runs on the main thread, repeatedly scheduled at 16 ms intervals.

Here it is important to use the get_nowait function in the main thread; if we used the get function, we would block there. This call does not block, but raises a queue.Empty exception if there is no element to fetch, so we have to catch and ignore it. In the webcam loop, we can use the blocking put function because it doesn't matter that we block run_webcam_loop; nothing else needs to run there.

And now everything works as expected, no more flashing frames!

    Conclusion

Combining a UI framework like tkinter with OpenCV allows us to build modern-looking applications with an interactive graphical user interface. Because the UI runs in the main thread, we run the image processing in a separate thread and synchronize the data between the threads using a single-slot queue. You can find a cleaned-up version of this demo with a more modular structure in the repository below. Let me know if you build something interesting with this approach. Take care!



Check out the full source code in the GitHub repo:

    https://github.com/trflorian/ctk-opencv



