Implementation/Infrastructure Support for Collaborative Applications

Implementation/Infrastructure Support for Collaborative Applications
Prasun Dewan

Infrastructure vs. Implementation Techniques
  • Implementation techniques are interesting when they are general
  • Apply to a class of applications
  • Coding of such an implementation technique is an infrastructure.
  • Sometimes implementation techniques apply to a very narrow set of applications
  • Operation transformation for text editors.
  • These may not qualify as infrastructures.
  • We will study implementation techniques applying to small and large application sets.
  • A collaborative application implements coupling itself; with infrastructure-supported sharing, a client delegates coupling to a sharing infrastructure.

Systems: Infrastructures
  • NLS (Engelbart ’68), Post Xerox
  • Colab (Stefik ’85), Xerox
  • VConf (Lantz ’86), Stanford
  • Rapport (Ahuja ’89), Bell Labs
  • XTV (Abdel-Wahab, Jeffay & Feit ’91), UNC/ODU
  • Rendezvous (Patterson ’90), Bellcore
  • Suite (Dewan & Choudhary ’92), Purdue
  • TeamWorkstation (Ishii ’92), Japan
  • Weasel (Graham ’95), Queens
  • Habanero (Chabert et al ’98), U. Illinois
  • JCE (Abdel-Wahab ’99), ODU
  • Disciple (Marsic ’01), Rutgers

Systems: Products
  • VNC (Li, Stafford-Fraser, Hopper ’01), AT&T Research
  • NetMeeting, Microsoft
  • Groove
  • Advanced Reality
  • LiveMeeting, Microsoft (pay-by-minute service model)
  • Webex (service model)

Issues/Dimensions
  • Architecture
  • Session management
  • Access control
  • Concurrency control
  • Firewall traversal
  • Interoperability
  • Composability
  • Each dimension (architecture model, session management, concurrency control) maps a collaborative system to one of several possible implementations.

Architecture
  • Infrastructure/client (logical) components
  • Component (physical) distribution

Shared Window Logical Architecture
  • An application with one window per user; near-WYSIWIS coupling.

Centralized Physical Architecture
  • XTV (’88), VConf (’87), Rapport (’88), NetMeeting
  • A single X client; input and output are relayed through pseudo servers to each user’s X server.

Replicated Physical Architecture
  • Rapport, VConf
  • An X client per user; only input is relayed among the pseudo servers.

Relaxing WYSIWIS: Model-View Logical Architecture
  • A shared model with per-user views and windows.

Centralized Physical Model
  • Rendezvous (’90, ’95): central model, distributed views.

Replicated Physical Model
  • Sync (’96), Groove: the model is replicated at each site.

Architectural Design Space
  • Combinations of centralized and replicated models and views.
  • Model/View are application-specific
  • Text Editor Model
  • Character String
  • Insertion Point
  • Font
  • Color
  • Need to capture these differences in architecture
Single-User Layered Interaction
  • I/O is implemented as layers 0..N: layer 0 is the program, layer N the physical devices, with abstraction increasing toward layer 0 and communication layers in between.

Example I/O Layers
  • Model, widget, window, framebuffer, physical devices, in order of increasing abstraction.
  • Example model: {“John Smith”, 2234.57}
  • Interactor = abstraction representation + syntactic sugar

Layered Interaction with an Object
  • Each layer presents an interactor for the abstraction of the layer above (e.g. the string “John Smith” rendered as a text field).

Identifying the Shared Layer
  • Layers 0..S form the program component; layers S+1..N form the user-interface component.
  • Higher layers will also be shared; lower layers may diverge.

Replicating the UI Component
  • Each user gets a copy of layers S+1..N above a single shared stack of layers 0..S.

Implementing the Centralized Architecture
  • A master input relayer and output broadcaster at the hosting site; slave I/O relayers at the other sites.

Implementing the Replicated (P2P) Architecture
  • The full layer stack runs at every site; an input broadcaster at each site distributes input to all replicas.

Classifying Previous Work
  • XTV
  • NetMeeting App Sharing
  • NetMeeting Whiteboard
  • Shared VNC
  • Habanero
  • JCE
  • Suite
  • Groove
  • LiveMeeting
  • Webex
Classifying Previous Work: Shared Layer and Replicated vs. Centralized
  • Shared layer
  • X Windows (XTV)
  • Microsoft Windows (NetMeeting App Sharing)
  • VNC Framebuffer (Shared VNC)
  • AWT Widget (Habanero, JCE)
  • Model (Suite, Groove, LiveMeeting)
  • Replicated vs. centralized
  • Centralized (XTV, Shared VNC, NetMeeting App. Sharing, Suite, PlaceWare)
  • Replicated (VConf, Habanero, JCE, Groove, NetMeeting Whiteboard)
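A minimal sketch of the replicated alternative, in which each site applies local input immediately and broadcasts the input event to peer replicas, which recompute the same state (the `Replica` class and its methods are illustrative, not any listed system's API):

```python
class Replica:
    """Replicated architecture sketch: every site runs the program,
    applies local input for immediate feedback, and broadcasts the
    input event to peer replicas, which recompute the same state."""

    def __init__(self):
        self.state = []   # stands in for the program's shared state
        self.peers = []   # replicas at the other sites

    def local_input(self, event):
        self._apply(event)              # immediate local feedback
        for peer in self.peers:         # input broadcast to peers
            peer.remote_input(event)

    def remote_input(self, event):
        self._apply(event)              # feedthrough at a remote site

    def _apply(self, event):
        self.state.append(event)

# Two sites sharing the same application state.
r1, r2 = Replica(), Replica()
r1.peers, r2.peers = [r2], [r1]
r1.local_input("insert d,1")
r2.local_input("insert e,2")
# With sequential (floor-controlled) input, all replicas converge.
assert r1.state == r2.state == ["insert d,1", "insert e,2"]
```

Note that convergence here relies on input arriving sequentially; truly concurrent input at different replicas raises the consistency problems discussed later.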
  • Service vs. Server vs. Local Communication
  • Local: User site sends data
  • VNC, XTV, VConf, NetMeeting Regular
  • Server: Organization’s site connected by LAN to user site sends data
  • NetMeeting Enterprise, Sync
  • Service: External sites connected by WAN to user site sends data
  • LiveMeeting, Webex
  • Push vs. Pull of Data
  • Consumer pulls new data by sending a request for it in response to:
  • notification (MVC)
  • receipt of previous data (VNC)
  • Producer pushes data for consumers
  • As soon as data are produced
  • NetMeeting, real-time Sync
  • When user requests
  • Asynchronous Sync
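The push/pull distinction above can be sketched with a producer that either pushes data directly or sends only a notification, after which the consumer pulls (the class and method names here are illustrative):

```python
class Producer:
    """Produces data and delivers it to consumers either by push or by
    notification-then-pull (the MVC style described above)."""

    def __init__(self, push=True):
        self.push = push
        self.data = None
        self.consumers = []

    def produce(self, value):
        self.data = value
        for c in self.consumers:
            if self.push:
                c.receive(value)      # push: data sent as soon as produced
            else:
                c.notify(self)        # pull: only a notification is sent

class Consumer:
    def __init__(self):
        self.seen = []

    def receive(self, value):         # push delivery
        self.seen.append(value)

    def notify(self, producer):       # pull: fetch the new data on demand
        self.seen.append(producer.data)

p = Producer(push=False)              # MVC-style notify-then-pull
c = Consumer()
p.consumers.append(c)
p.produce("new model state")
assert c.seen == ["new model state"]
```

Pull lets a slow consumer coalesce updates (as VNC does between framebuffer requests), while push minimizes latency for real-time coupling.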
  • Dimensions
  • Shared layer level.
  • Replicated vs. Centralized.
  • Local vs. Server vs. Service Broadcast
  • Push vs. Pull Data
Evaluating Design Space Points
  • Coupling flexibility
  • Automation
  • Ease of learning
  • Reuse
  • Interoperability
  • Firewall traversal
  • Concurrency and correctness
  • Security
  • Performance: bandwidth usage, computation load, scaling, join/leave time
  • Response time: feedback to the actor (local, remote), feedthrough to observers (local, remote)
  • Task completion time

Sharing Low-Level vs. High-Level Layer
  • Sharing a layer nearer the data:
  • Greater view independence
  • Less bandwidth usage (though for large data, the visualization is sometimes the more compact representation)
  • Finer-grained access and concurrency control (shared window systems support floor control)
  • Replication problems better solved with more app semantics (more on this later)
  • Sharing a layer nearer the physical device:
  • Referential transparency (“the green object” has no meaning if objects are colored differently for each user)
  • Higher chance the layer is standard (Sync vs. VNC), which promotes reusability and interoperability
  • Sharing flexibility limited with fixed layer sharing
  • Need to support multiple layers.
Centralized vs. Replicated: Distributed Computing vs. CSCW
  • Distributed computing: more reads (output) favor replicated; more writes (input) favor centralized.
  • CSCW: input is immediately delivered without distributed commitment; floor control or operation transformation is used for correctness.

Bandwidth Usage in Replicated vs. Centralized
  • Remote I/O bandwidth only an issue when network bandwidth < 4MBps (Nieh et al ‘2000)
  • DSL link = 1 Mbps
  • Input in replication less than output
  • Input produced by humans
  • Output produced by faster computers
  • Feedback in Replicated vs. Centralized
  • Replicated: Computation time on local computer
  • Centralized
  • Local user
  • Computation time on local computer
  • Remote user
  • Computation time on hosting computer plus roundtrip time
  • In the server/service model, an extra LAN/WAN link
  • Influence of communication cost
  • Window sharing remote feedback
  • Noticeable in NetMeeting.
  • Intolerable in PlaceWare’s service model.
  • Powerpoint presentation feedback time
  • not noticeable in Groove & Webex replicated model.
  • noticeable in NetMeeting for remote user.
  • Not typically noticeable in Sync with shared model
  • Depends on the amount of communication with the remote site
  • Which depends on shared layer
Case Study: Collaborative Video Viewing (Cadiz, Balachandran et al. 2000)
  • Two users collaboratively executing media player commands
  • Centralized NetMeeting sharing added unacceptable video latency
  • A replicated architecture was later created using T.120
  • Part of problem in centralized system sharing video through window layer
  • Influence of Computation Cost
  • Computation intensive apps
  • Replicated case: local computer’s computation power matters.
  • Central case: central computer’s computation power matters
  • Central architecture can give better feedback, especially with a fast network [Chung and Dewan ’99]
  • Asymmetric computation power => asymmetric architecture (server/desktop, desktop/PDA)
  • Feedthrough
  • Time to show results at remote site.
  • Replicated:
  • One-way input communication time to remote site.
  • Computation time on local replica
  • Centralized:
  • One-way input communication time to central host
  • Computation time on central host
  • One-way output communication time to remote site.
  • Server/service model add latency
  • Less significant than remote feedback:
  • Active user not affected.
  • But must synchronize with audio
  • “can you see it now?”
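The feedback/feedthrough comparison above can be written as a simple latency model; the numbers below are hypothetical, purely to illustrate the accounting:

```python
def feedthrough_replicated(input_one_way_ms, compute_local_ms):
    """Replicated: a remote site sees the result after receiving the
    one-way input and recomputing on its local replica."""
    return input_one_way_ms + compute_local_ms

def feedthrough_centralized(input_to_host_ms, compute_host_ms,
                            output_one_way_ms):
    """Centralized: input travels to the central host, is processed
    there, and the output travels on to the remote site."""
    return input_to_host_ms + compute_host_ms + output_one_way_ms

# Hypothetical LAN numbers: 10 ms one-way links, 5 ms computation.
assert feedthrough_replicated(10, 5) == 15
assert feedthrough_centralized(10, 5, 10) == 25
```

The server/service models add further one-way links to each path, which is why feedthrough (and, for remote users, feedback) grows with each hop.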
  • Task completion time
  • Depends on
  • Local feedback
  • Assuming hosting user inputs
  • Remote feedback
  • Assuming a non-hosting user inputs
  • Not the case in presentations, where centralized favored
  • Feedthrough
  • If interdependencies in task
  • Not the case in brainstorming, where replicated favored
  • Sequence of user inputs
  • Chung and Dewan ’01
  • Used Mitre log of floor exchanges and assumed interdependent tasks
  • Task completion time usually smaller in replicated case
  • Asymmetric centralized architecture good when computing power asymmetric (or task responsibility asymmetric?).
  • Scalability and Load
  • Centralized architecture with powerful server more suitable.
  • Need to separate application execution from distribution.
  • PlaceWare
  • Webex
  • Related to firewall traversal. More later.
  • Many collaborations do not require scaling
  • 2-3 collaborators in joint editing
  • 8-10 collaborators in CAD tools (NetMeeting Usage Data)
  • Most calls are not conference calls!
  • Adapt between replicated and centralized based on # collaborators
  • PresenceAR goals
  • Display Consistency
  • Not an issue with floor control systems.
  • Other systems must ensure that concurrent input appears to all users to be processed in the same (logical) order.
  • Automatically supported in central architecture.
  • Not so in replicated architectures as local input processed without synchronizing with other replicas.
Synchronization Problems
  • Starting from “abc”, user 1 performs Insert d,1 while user 2 concurrently performs Insert e,2; each replica processes the two operations in a different order, so the replicas diverge.

Peer-to-Peer Merger (Ellis and Gibbs ’89, Groove, …)
  • A merger at each site transforms remote operations against concurrent local ones (e.g. Insert e,2 becomes Insert e,3), so all replicas converge.

Local and Remote Merger
  • Curtis et al ’95, LiveMeeting, Vidot ‘02
  • Feedthrough via extra WAN Link
  • Can recreate state through central site
Centralized Merger
  • A single merger at a central site orders and transforms the operations (e.g. Insert e,2 becomes Insert e,3) before distributing them to the input distributors at each site.
  • Munson & Dewan ‘94
  • Asynchronous and synchronous
  • Blocking remote merge
  • Understands atomic change set
  • Flexible remote merge semantics
  • Modify or delete can win
Merging vs. Concurrency Control
  • Real-time merging is called optimistic concurrency control
  • This is a misnomer because it does not support serializability.
  • More on this later.
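The transform at the heart of the peer-to-peer merger can be sketched for the insert-insert case. Positions here are 0-indexed, and the tie-break by site id is one common convention, not the only one; this is a toy version of the Ellis and Gibbs idea, not any system's actual code:

```python
def apply_op(doc, op):
    """Apply an insert operation (char, position) to a string."""
    ch, pos = op
    return doc[:pos] + ch + doc[pos:]

def transform(op, other, op_site, other_site):
    """Inclusion-transform `op` against a concurrent `other` insert:
    if the other insert lands at or before our position, shift right."""
    ch, pos = op
    _, opos = other
    if opos < pos or (opos == pos and other_site < op_site):
        return (ch, pos + 1)
    return op

doc = "abc"
op1 = ('d', 1)   # site 1 inserts 'd' between a and b
op2 = ('e', 2)   # site 2 concurrently inserts 'e' between b and c

# Site 1: apply op1 locally, then the transformed remote op2.
site1 = apply_op(apply_op(doc, op1), transform(op2, op1, 2, 1))
# Site 2: apply op2 locally, then the transformed remote op1.
site2 = apply_op(apply_op(doc, op2), transform(op1, op2, 1, 2))
assert site1 == site2 == "adbec"   # both sites converge
```

Without the transform, the two sites would apply the operations in different orders and diverge, which is exactly the synchronization problem illustrated above.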
Reading Centralized Resources
  • In a replicated architecture, each replica executes read “f” against the central file: a central bottleneck, though the read operation is executed infrequently.

Writing Centralized Resources
  • Each replica also executes write “f”, “c”, so the central file receives multiple writes for a single user action.

Replicating Resources
  • Groove Shared Space &Webex replication
  • Pre-fetching
  • Incremental replication (diff-based) in Groove
Non-Idempotent Operations
  • With replicated programs, an input such as mail joe, msg is executed at every replica, so the message is sent multiple times.

Separate Program Component
  • Keep the non-idempotent part in a separate, unreplicated program component that performs the external effect once, while idempotent input such as insert d,1 is still processed at every replica.
  • Groove Bot: Dedicated machine for external access
  • Only some users can invite Bot in shared space
  • Only some users can invoke Bot functionality
  • Bot data can be given only to some users
  • Similar idea of special “externality proxy” in Begole 01
Two-Level Program Component
  • A higher-level program component above the replicas forwards mail joe, msg to a single program, which sends the message once.
  • Dewan & Choudhary ’92, Sync, LiveMeeting
  • Extra comm. hop and centralization
  • Easier to implement
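Both remedies funnel the external side effect through a single site. A minimal sketch of that guard follows; the class name and the `send_mail` hook are hypothetical, standing in for whatever external service is invoked:

```python
class ExternalityProxy:
    """In a replicated architecture every replica sees the 'mail' input,
    but only one designated site (cf. the Groove Bot or a two-level
    program component) performs the non-idempotent external effect."""

    def __init__(self, is_designated, send_mail):
        self.is_designated = is_designated
        self.send_mail = send_mail    # hypothetical external hook

    def handle_mail(self, to, msg):
        if self.is_designated:        # guard: execute the effect once
            self.send_mail(to, msg)

sent = []
replicas = [ExternalityProxy(i == 0, lambda to, m: sent.append((to, m)))
            for i in range(3)]
for r in replicas:                    # the same input reaches every replica
    r.handle_mail("joe", "msg")
assert sent == [("joe", "msg")]       # the mail is sent exactly once
```

The cost, as noted above, is an extra communication hop and a point of centralization for the guarded operations.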
Classifying Previous Work
  • Shared layer
  • X Windows (XTV)
  • Microsoft Windows (NetMeeting App Sharing)
  • VNC Framebuffer (Shared VNC)
  • AWT Widget (Habanero, JCE)
  • Model (Suite, Groove, PlaceWare)
  • Replicated vs. centralized
  • Centralized (XTV, Shared VNC, NetMeeting App. Sharing, Suite, PlaceWare)
  • Replicated (VConf, Habanero, JCE, Groove, NetMeeting Whiteboard)
  • Layer-specific
  • So far, layer-independent discussion.
  • Now concrete layers to ground discussion
  • Screen sharing
  • Window sharing
  • Toolkit sharing
  • Model sharing
Centralized Window Architecture
  • The single window client runs at the hosting site. Its output broadcaster and I/O relayer translate one draw a, w, x, y request into draw requests for each user’s window server, and relay each user’s keystroke (a ^, w, x, y) back to the client.

UI Coupling in Centralized Architecture
  • Existing approach
  • T.120, PlaceWare
  • UI coupling need not be supported
  • XTV
  • The output broadcaster and I/O relayer also broadcast window-management operations: one user’s move w becomes move w1, move w2, move w3 at the three window servers.

Distributed Architecture for UI Coupling
  • Need multicast server at each LAN
  • Can be supported by T.120
  • In the distributed version, each I/O relayer is extended so that a move w is relayed to and applied at every window server.

Two Replication Alternatives
  • Replicate layer S by:
  • S-1 sending input events to all S instances, or
  • S sending events directly to all peers
  • Direct communication allows partial sharing (e.g. of windows)
  • But is harder for the infrastructure to implement automatically

Semantic Issue
  • Should window positions be coupled?
  • Leads to window wars (Stefik et al ’85)
  • Can uncouple windows
  • Cannot refer to the “upper left” shared window
  • Compromise
  • Create a virtual desktop for physical desktop of a particular user
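The virtual-desktop compromise amounts to a coordinate translation between the host's physical desktop and the virtual-desktop window embedded in each remote user's screen. A sketch, with illustrative function names and offsets:

```python
def to_virtual(x, y, vd_origin):
    """Map a host-desktop coordinate into the remote user's
    virtual-desktop window, embedded at vd_origin on their screen."""
    ox, oy = vd_origin
    return x + ox, y + oy

def to_host(x, y, vd_origin):
    """Map input given in virtual-desktop coordinates back to host
    coordinates, so the unmodified application sees consistent positions."""
    ox, oy = vd_origin
    return x - ox, y - oy

# Remote user 2 shows the shared desktop at offset (100, 50): output is
# drawn at the translated position, and their input is translated back.
vx, vy = to_virtual(30, 40, (100, 50))
assert (vx, vy) == (130, 90)
assert to_host(vx, vy, (100, 50)) == (30, 40)
```

Because all shared windows keep their relative positions inside the virtual desktop, users can still refer to "the upper left shared window" without coupling their real desktops.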
UI Coupling and Virtual Desktop
  • The virtual desktop knows how to map between each user’s screen and the shared desktop.

Raw Input with Virtual Desktop
  • Remote input a ^, x’, y’ arrives in virtual-desktop coordinates; the VD-aware I/O relayers translate between these and the host window client’s coordinates for both input and output.

Translation without Virtual Desktop
  • Alternatively, a translator in the output broadcaster/I/O relayer maps between each user’s windows (w2, w3) and the corresponding host window (w1).

Coupled Expose Events: NetMeeting, PlaceWare
  • With the T.120 virtual desktop, an expose w caused by one user (e.g. front w3) is broadcast to all sites, and the resulting draw w is shown to all users.

Uncoupled Expose Events
  • XTV (no Virtual Desktop)
  • expose event not broadcast so remote computers do not blacken region
  • Potentially stale data
  • In this scheme, the expose event from one user (front w3) reaches the window client, and the client’s draw w output is still broadcast to all window servers.

Uncoupled Expose Events
  • A centralized collaboration-transparent app draws to the areas of the last user who sent an expose event.
  • It may be sent only local expose events
  • If it redraws the entire window anyway, everyone is coupled.
  • If it draws only the exposed areas:
  • Send the draw request only to the inputting user
  • This works as long as unexposed but visible regions are not changing.
  • It assumes a draw request can be associated with an expose event.
  • To support this accurately, the system needs to send the app the union of the exposed regions received from multiple users.
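The accurate variant, sending the app the union of exposed regions, can be sketched as follows; representing regions as sets of cells is a simplification (a real window system would use region algebra on rectangles):

```python
def union_exposed(regions):
    """Merge the exposed rectangles reported by different users into a
    single region, kept here as a set of (x, y) cells for simplicity."""
    cells = set()
    for (x0, y0, x1, y1) in regions:     # half-open rectangles
        cells.update((x, y) for y in range(y0, y1) for x in range(x0, x1))
    return cells

# User 1 exposed (0,0)-(2,2); user 2 exposed (1,1)-(3,3); the app is
# asked to redraw their union (7 cells, since one cell overlaps).
merged = union_exposed([(0, 0, 2, 2), (1, 1, 3, 3)])
assert len(merged) == 7
assert (0, 0) in merged and (2, 2) in merged
```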
Window-Based Coupling
  • Mandatory: window sizes, window contents
  • Optional: window positions, window stacking order, window exposed regions
  • Optional coupling can be done with or without a virtual desktop.
  • Without one, remote and local windows mix, rather than remote windows being embedded in a virtual desktop window.
  • This can lead to “window wars” (Stefik et al ’87).
  • In a shared window system some properties must be coupled and others may be.

Replicated Window Architecture
  • Each user’s input (a ^, w, x, y) is applied to the local program replica and distributed by the input distributors to the other replicas, which produce the corresponding draw requests locally.
  • With UI coupling, input broadcasters also distribute window-management operations such as move w.
  • With expose coupling, expose w events are distributed so that every replica redraws.

Replicated Window System
  • Centralized only implemented commercially
  • NetMeeting
  • PlaceWare
  • Webex
  • Replicated can offer more efficiency and pass through firewalls limiting large traffic
  • Must be done carefully to avoid correctness problems
  • Harder but possible at window layer
  • Chung and Dewan ’01
  • Assume floor control as centralized systems do
  • Also called intelligent app sharing
  • Screen Sharing
  • The client of the screen layer is the window system (and all applications running on top of it)
  • Cannot share windows of subset of apps
  • Share complete computer state
  • Lowest layer gives coarsest sharing granularity.
Sharing the (VNC) Framebuffer Layer
  • VNC centralized framebuffer sharing: the window client and window server run at the hosting site; the output broadcaster and I/O relayer send framebuffer updates (draw pixmap rect, i.e. frame diffs) to the I/O relayers at each site and receive their key and mouse events.

Replicated Screen Sharing?
  • Replication is hard, if not impossible
  • Each computer would run a framebuffer server and share input
  • Requires replication of the entire computer state
  • Either all computers are identical …
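The frame diffs that VNC-style framebuffer sharing sends can be sketched as computing a bounding rectangle of changed pixels and shipping only that update; this toy version uses lists of rows for frames and is illustrative, not the RFB protocol's actual encoding:

```python
def frame_diff(old, new):
    """Return (x, y, rect) covering the changed pixels between two
    frames (lists of rows), or None if nothing changed."""
    changed = [(y, x) for y, row in enumerate(new)
               for x, px in enumerate(row) if old[y][x] != px]
    if not changed:
        return None
    ys = [y for y, _ in changed]
    xs = [x for _, x in changed]
    y0, y1, x0, x1 = min(ys), max(ys), min(xs), max(xs)
    rect = [row[x0:x1 + 1] for row in new[y0:y1 + 1]]
    return (x0, y0, rect)

def apply_diff(frame, diff):
    """Apply an (x, y, rect) update to a remote framebuffer copy."""
    if diff is None:
        return
    x0, y0, rect = diff
    for dy, row in enumerate(rect):
        frame[y0 + dy][x0:x0 + len(row)] = row

old = [[0, 0, 0, 0] for _ in range(3)]
new = [row[:] for row in old]
new[1][2] = 5                         # one pixel changes on the host
diff = frame_diff(old, new)
remote = [row[:] for row in old]
apply_diff(remote, diff)              # the remote copy catches up
assert remote == new
```

Shipping only changed rectangles is what keeps the centralized framebuffer architecture's bandwidth usage tolerable between updates.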