
The way people consume media and use apps has fundamentally changed in the era of second-screen culture. It’s now common to see someone watching a TV show while simultaneously scrolling through social media on their phone, or playing a game on a tablet while listening to a podcast. In fact, around 88% of Americans use a second device while watching TV, and globally about 86% of internet users report this kind of multi-device multitasking.
This shift in user behavior has big implications for how apps are designed and developed. App developers need to recognize that their users are often dividing attention across multiple screens, and they must adapt to keep those users engaged. Below, we explore how second-screen habits are influencing app experiences and development trends: from gaming platforms enabling layered interaction to new hardware built for multitasking, and how developers can respond, with some help from AI.
Gaming Apps Encourage Layered Interaction

Gamers are arguably the power users of multitasking. Many PC gamers, for instance, will play a game on one monitor while streaming music or chatting on Discord on another. Mobile gamers and online gamblers similarly juggle multiple activities – think of a poker player in an online tournament who also has a side game running. This phenomenon is sometimes called layered interaction: multiple interactive experiences running in parallel.
It’s no surprise that gaming platforms have been early adopters of this approach; they have long been at the forefront of tech innovation and creativity in platform–player interaction. The well-known gaming website Ignition Poker USA, for instance, promotes layered interaction as one of the advantages of its platform. The pitch is simple: if you have a favorite casino game besides poker, you can play it simultaneously, directly within the poker interface, without having to leave the game. But how is this possible technically?
From a technical standpoint, modern game clients are built to handle concurrent processes and embedded content. In the Ignition Poker example, a software update introduced a “casino-in-poker” feature that lets a user spin slot reels right inside the poker lobby while their poker game continues – no separate app or browser required. Essentially, the client runs multiple game modules in parallel: a multi-threaded architecture and careful resource management ensure neither game lags, and the poker interface simply spawns the slot game as a separate layer or window within the app, something today’s powerful mobile processors and UI frameworks make practical. This seamless layering means a player can multi-table poker hands and enjoy a quick blackjack hand or a roulette spin at the same time, all in one unified experience.
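As a rough illustration of the pattern (not Ignition’s actual architecture), the sketch below runs two independent game modules side by side with Kotlin coroutines; the module names and tick rates are invented:

```kotlin
import kotlinx.coroutines.*

// Minimal sketch of "layered" game modules: each module runs its own
// update loop on an independent coroutine, so a slot mini-game can tick
// alongside the main poker table without blocking it.
class GameModule(val name: String, private val tickMs: Long) {
    suspend fun run() {
        while (true) {
            delay(tickMs)                    // simulate per-module update cadence
            println("[$name] state updated") // a real client renders into its own UI layer
        }
    }
}

fun main() = runBlocking {
    val poker = GameModule("poker-table", tickMs = 500)
    val slots = GameModule("slot-overlay", tickMs = 300)

    // Launch both modules concurrently; cancelling the parent tears both down.
    val session = launch {
        launch { poker.run() }
        launch { slots.run() }
    }

    delay(2_000)     // let both "games" run side by side for a moment
    session.cancel() // closing the lobby stops every layered module at once
    println("session closed")
}
```

In a real client each module would render into its own UI layer instead of printing, but the lifecycle shape is the same: independent loops under one cancellable session.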
The result is higher engagement – there’s never a dull moment if you can double up activities. (Indeed, one study found that viewers using dual screens spent 30% more time interacting with content compared to single-screen viewers, highlighting how layering can boost engagement.)
Hardware Innovations Fueling Multi-Tasking
The tech industry isn’t blind to this cultural shift – in fact, hardware is rapidly evolving to facilitate multi-screen and multi-app use. Smartphones have grown in size and capability over the past decade precisely to accommodate more multitasking. Large-screen phones (often jokingly called “phablets” in the past) are now the norm, and foldable devices like Samsung’s Galaxy Z Fold series literally unfold into small tablets to give users more screen real estate. On a device like the Galaxy Z Fold, a user can comfortably run two or three apps side by side.
Samsung’s software even supports up to three apps in a split view plus a floating pop-up window at once – a testament to how seriously manufacturers are taking multi-tasking. In practical terms, you could be video-calling in one panel, browsing the web in a second, and messaging in a third. And the trend isn’t stopping there: Samsung’s mobile chief recently confirmed they are “working hard on a tri-fold smartphone with the goal of launching it at the end of this year”. Rumors suggest this tri-fold device will unfurl into a 10-inch tablet-like display, effectively putting a full multi-window desktop experience in your pocket. Clearly, the industry expects that users want to do even more at once on a single device.
For app developers, these hardware advances mean your app must play nicely in a multi-window, multi-device environment. Designing for foldables, resizable windows, and split-screen support is becoming an essential part of mobile app development. On modern Android and iOS, users can split the screen between apps or slide one app over another. If your app isn’t built to be resizable or doesn’t support multi-window operation, users will notice the friction. Google and Samsung both publish guidelines encouraging developers to make apps responsive to different screen sizes and orientations, and to support continuity (so an app transitions seamlessly from a foldable’s small cover screen to its large inner screen).
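As a concrete starting point, here is a minimal sketch of window-size-aware layout selection on Android using Jetpack WindowManager; the compact/medium/expanded breakpoints follow the common window size classes, and the layout names are placeholders:

```kotlin
import android.app.Activity
import androidx.window.layout.WindowMetricsCalculator

// Pick a layout tier from the *current window* size, not the device screen:
// in split-screen or on a foldable, the window can resize at any moment,
// so recompute this on every configuration change.
fun chooseLayout(activity: Activity): String {
    val metrics = WindowMetricsCalculator.getOrCreate()
        .computeCurrentWindowMetrics(activity)
    val widthDp = metrics.bounds.width() /
        activity.resources.displayMetrics.density

    return when {
        widthDp < 600f -> "compact"  // phone, or a narrow split-screen pane
        widthDp < 840f -> "medium"   // cover screen, small tablet pane
        else -> "expanded"           // full inner display of a foldable/tablet
    }
}
```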
Supporting these modes is highly recommended for an optimal experience on foldables. Otherwise, an app might default to a stretched or “compatibility mode” view, or worse, restart when the screen mode changes and lose the user’s progress. Supporting multi-window also means thinking about pausing and resuming: your app could be running alongside two others, or in the background while the user focuses on a different screen. It should handle that gracefully – for example, a video app should continue playing (perhaps in a small picture-in-picture window) if the user checks a message, and a note-taking app should autosave its state when not in focus.
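A minimal Android sketch of that graceful-backgrounding behavior might look like the following; PlayerActivity and its saveState() helper are hypothetical, and picture-in-picture additionally requires android:supportsPictureInPicture="true" in the manifest:

```kotlin
import android.app.Activity
import android.app.PictureInPictureParams
import android.os.Build
import android.util.Rational

// Sketch: a video activity drops into picture-in-picture when the user
// switches focus, instead of simply pausing, and autosaves on every pause.
class PlayerActivity : Activity() {

    override fun onUserLeaveHint() {
        super.onUserLeaveHint()
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            val params = PictureInPictureParams.Builder()
                .setAspectRatio(Rational(16, 9))
                .build()
            enterPictureInPictureMode(params) // keep playback in a floating window
        }
    }

    override fun onPause() {
        super.onPause()
        saveState() // the app may sit beside two others or lose focus at any time
    }

    private fun saveState() { /* persist playback position, drafts, etc. */ }
}
```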
Developers may even consider multi-instance support, where the user can launch two separate instances of your app (imagine a user wanting to compare two documents in the same app side by side). This level of flexibility is becoming important on large screens. The bottom line: as hardware enables more multitasking, apps must adapt their UI/UX for split attention. Testing your app on a foldable or in a simulated multi-window setup is now as crucial as testing on different browsers was in the past.
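On Android, one way to approximate multi-instance support is with document-style task flags, roughly as in this sketch; DocumentActivity and the "doc_id" extra are placeholder names, and the activity must also be resizable for side-by-side use:

```kotlin
import android.app.Activity
import android.content.Context
import android.content.Intent

class DocumentActivity : Activity() // placeholder for the app's real document screen

// Open a second, independent instance of the same screen so the user can
// compare two documents in split view. NEW_DOCUMENT + MULTIPLE_TASK asks
// Android to create a fresh task instead of reusing the existing one.
fun openInNewInstance(context: Context, docId: String) {
    val intent = Intent(context, DocumentActivity::class.java).apply {
        putExtra("doc_id", docId)
        addFlags(
            Intent.FLAG_ACTIVITY_NEW_TASK or
            Intent.FLAG_ACTIVITY_NEW_DOCUMENT or
            Intent.FLAG_ACTIVITY_MULTIPLE_TASK
        )
    }
    context.startActivity(intent)
}
```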
Designing for the Second-Screen User (and How AI Can Help)
In the realm of streaming and media, as well as platforms offering casino games, developers have started integrating features that complement what’s on the “first screen.” Notifications and calls-to-action, for instance, should be timed and designed thoughtfully: if you know the user is probably watching something else, a well-timed prompt (perhaps during a lull in the primary content) can bring them back to your app.
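As a sketch of what a deliberately timed prompt could look like on Android, the following defers a re-engagement notification with WorkManager; the fixed delay stands in for whatever lull-prediction logic an app actually uses, and the channel, icon, and copy are placeholders (the sketch also assumes the notification channel exists and notification permission has been granted):

```kotlin
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters
import java.util.concurrent.TimeUnit

// Fires the prompt when WorkManager runs the job, i.e. after the delay.
// Assumes the "prompts" channel was created and POST_NOTIFICATIONS granted.
class PromptWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        val notification = NotificationCompat.Builder(applicationContext, "prompts")
            .setSmallIcon(android.R.drawable.ic_dialog_info)
            .setContentTitle("Your table is waiting")
            .setContentText("Jump back in before the next hand starts.")
            .build()
        NotificationManagerCompat.from(applicationContext).notify(1, notification)
        return Result.success()
    }
}

// Defer the prompt instead of firing it while the user is mid-show.
fun scheduleDuringLull(context: Context, delayMinutes: Long) {
    val request = OneTimeWorkRequestBuilder<PromptWorker>()
        .setInitialDelay(delayMinutes, TimeUnit.MINUTES)
        .build()
    WorkManager.getInstance(context).enqueue(request)
}
```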
Artificial intelligence is playing a big role in creating smart, responsive app experiences. One useful tool is Automatic Content Recognition (ACR), which can tell what you’re watching or listening to on another device. For example, your phone app could “hear” a TV show you’re watching and instantly show related content, like character info or live polls. This kind of real-time syncing makes the app feel like a helpful companion to your main screen.
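A toy sketch of that pipeline might look like this; the AcrClient interface and its identify() call are hypothetical stand-ins for a commercial ACR SDK, and fingerprintOf() is deliberately simplistic (real ACR uses spectral landmarks, but the shape of the loop is the same):

```kotlin
import kotlinx.coroutines.runBlocking

data class Match(val title: String, val positionSec: Int)

// Hypothetical recognition service: fingerprint in, matched content out.
interface AcrClient {
    suspend fun identify(fingerprint: ByteArray): Match?
}

// Second-screen ACR loop: sample ambient audio, reduce it to a compact
// fingerprint, ask the service what is playing, then surface companion
// content (character bios, polls, shoppable items) for that scene.
suspend fun syncCompanionContent(client: AcrClient, audioWindow: ShortArray) {
    val fingerprint = fingerprintOf(audioWindow)
    val match = client.identify(fingerprint) ?: return
    println("Watching ${match.title} at ${match.positionSec}s")
}

// Toy fingerprint: one quantized energy byte per fixed-size frame.
fun fingerprintOf(samples: ShortArray, frame: Int = 1024): ByteArray =
    samples.toList().chunked(frame) { chunk ->
        val energy = chunk.sumOf { it.toLong() * it } / chunk.size
        (energy shr 24).toByte()
    }.toByteArray()

fun main() = runBlocking {
    val stub = object : AcrClient { // stub matcher, for demonstration only
        override suspend fun identify(fingerprint: ByteArray) =
            if (fingerprint.isNotEmpty()) Match("Demo Show", 421) else null
    }
    syncCompanionContent(stub, ShortArray(4096) { (it % 128).toShort() })
}
```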
Consider e-commerce: 65% of viewers have used a second screen to look up products seen in TV ads. An app that knows what ad just played on the smart TV could instantly show the product details or a coupon – turning that impulse into a quick conversion. AI can power these features by crunching viewer data and linking content across platforms. As one industry guide notes, leveraging viewer data from second screens allows apps to deliver tailored content and even targeted ads in sync with the user’s context. In short, AI helps apps anticipate the user’s split attention and provide value exactly when and where it’s needed.