
Beginning ActionScript 2.0 (2006)
Chapter 20
Passing null to the connect method prevents it from attempting a persistent connection to a specific remote server. Then create a netStream object, which gives you controls for handling the actual data connection.
The video object you added to the stage is then associated with the netStream object. When the play method is called, netStream sends the data to the video object.
This might seem like a few too many hoops to jump through just to show a video. However, the three objects together give you granular control over the entire process: declaring the remote server, manipulating the stream, and playing and controlling the video. If you are looking for something more streamlined, or you are working with many clips in different places in your application, the process is easily made into a class.
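For instance, the three-object setup might be wrapped in a small class such as the following sketch. The class name FLVPlayer and its method names are illustrative, not part of the Flash API:

```actionscript
// FLVPlayer.as: a minimal sketch wrapping the three objects in one class.
// The class and member names here are hypothetical, not part of the Flash API.
class FLVPlayer {
    private var nc:NetConnection;
    private var ns:NetStream;
    private var screen:Video;

    public function FLVPlayer(target:Video) {
        screen = target;
        nc = new NetConnection();
        nc.connect(null);       // null: progressive download, no media server
        ns = new NetStream(nc);
        screen.attachVideo(ns); // route the stream's frames to the video object
    }

    public function play(url:String):Void {
        ns.play(url);
    }

    public function togglePause():Void {
        ns.pause();             // pause() with no argument toggles
    }
}
```

Usage would then be as simple as `new FLVPlayer(clip_video).play("myVideo.flv");`.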
This method works only in Flash 7 when the Spark codec is used. This method will not work in Flash 6 or lower. Flash 8 can use this method with both On2 and Spark codecs.
Controlling Video Position
You have just seen how you can use the netStream object to play videos directly on the stage and not worry about total frame length, frame rate, and sound position. Now you’re concentrating on manipulating the video with some controls. As with previous Try It Out examples, you use components to quickly set up controls to highlight a few of the methods available.
There is also the Media component, an all-in-one audio and video player. Nothing beats the Media component for the speed at which you can add an FLV file to your application, and you are encouraged to try it out. For the purposes of this book, however, you will learn how to do some of these things by hand so that you have more customized control over how your video is displayed and controlled.
Try It Out: Creating Simple Controls
If you have not completed the previous Try It Out example, you should do that now. You’ll be building upon the FLA you created by adding some controls.
1. Open the loadFLVvideo.fla file.
2. You'll be using controls to manipulate your file. You'll want to grab some detail information about your FLV file via the netStream onMetaData event. Click the first frame in the timeline and open the Actions panel. Add the onMetaData handler and a pause call so that your ActionScript in frame 1 now looks like the following:
var myNetConnect:NetConnection = new NetConnection();
myNetConnect.connect(null);
var myNetStream:NetStream = new NetStream(myNetConnect);
myNetStream.onMetaData = function(meta) {
    myNetStream.duration = meta.duration;
    myNetStream.videoRate = meta.videodatarate;
    myNetStream.audioRate = meta.audiodatarate;
};
clip_video.attachVideo(myNetStream);
myNetStream.play("myVideo.flv");
myNetStream.pause();
Managing Video
3. Test your SWF now, and you will see that the video is buffering but not playing. Start adding some controls. Open the Components panel and drag a ProgressBar component onto the stage and place it directly below your video object. Open the Properties panel for the stage instance of the ProgressBar component. Name the instance clipProgress. In the Properties panel for the component, change the width of the progress bar to 250. In the Parameters tab of the Properties panel, remove the String value of the Label.
4. Add the following code to frame 1 beneath the code you've already written:
myNetStream.play("http://www.memoryprojector.com/jeff/myVideo.flv");
clipProgress.mode = "manual";
trackProgress = function () {
    clipProgress.setProgress(myNetStream.time, myNetStream.duration);
};
track = setInterval(this, "trackProgress", 10);
_level0.clipProgress.onPress = function() {
    var perc = ((_xmouse - this._x) / (this._width));
    myNetStream.seek(int(perc * myNetStream.duration));
};
5. Test your SWF to see that a progress bar now shows you the current playhead position of the video. Clicking the ProgressBar sends the video to that position in the video.
6. Open the Components panel and drag a Button component onto the stage directly beneath the ProgressBar component you added earlier. In the Properties panel for the button instance you just added to the stage, change the name to playButton. In the Parameters tab of the Properties panel, change the Label to Play. Change the width of the button to 40 pixels.
7. Add the following ActionScript to the code after the code that exists on frame 1:
playButton.enabled = false;
playButton.onRelease = function() {
    this.enabled = false;
    stopButton.enabled = true;
    myNetStream.pause();
};
8. Open the Components panel and drag a Button component onto the stage next to the play button you added earlier. In the Properties panel for the button instance you just added to the stage, change the name to stopButton. In the Parameters tab of the Properties panel, change the Label to Stop. Change the width of the button to 40 pixels.
9. Add the following ActionScript to the end of the code on frame 1:
stopButton.onRelease = function() {
    this.enabled = false;
    playButton.enabled = true;
    myNetStream.pause();
};
10. Test your SWF file. You can now pause and play your movie. You can also click the ProgressBar, which uses the seek method of the netStream object to move through the track.
How It Works
In this example you saw how using small amounts of ActionScript can allow you granular control over the video.
You also saw how the onMetaData event can help you create a user interface that keeps track of the file's play progress. By comparing the netStream object's time property with the duration value captured in the onMetaData event, you can display a progress bar that accurately shows the current playhead position.
You also used the seek method of the netStream object. By using seek, you move to the nearest keyframe to the position being sought. If you had few keyframes, this would have felt chunky or inaccurate. Because you have keyframes at regular intervals and allowed the encoder to automatically add keyframes where necessary for quality, the seek is accurate enough.
You also used the pause method of the netStream object. You can see that this method is called by both the play and stop buttons. This is because the method is a toggle that alternately resumes or pauses a video. The play method of the netStream object is used exclusively to begin rebuffering the track from the beginning, and so you called it only once to load the video.
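A quick sketch of that toggle behavior, using the myNetStream object from the example; pause also accepts an optional Boolean when you want to force a state rather than toggle:

```actionscript
// pause() with no argument toggles the play state;
// passing a Boolean forces a specific state instead.
myNetStream.pause();      // pauses if playing, resumes if paused
myNetStream.pause(true);  // always pause
myNetStream.pause(false); // always resume
```

Forcing the state with a Boolean is useful in button handlers, where a double-click on a toggle could otherwise leave the video in the opposite state from what the label says.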
Probably the fundamental lesson to be learned here is that netStream contains the bulk of the methods used to control a streaming netConnection object. You could have used attachAudio instead of attachVideo, and your netStream method calls would work just the same. In this way the control methods are abstracted from the media and can control any media passed through the netConnection object. This gives you unparalleled freedom and simplicity in defining classes and scripts that control the media in your applications.
The netStream and video object play controls are unlike those of any other objects in Flash that control media. This is unfortunate, but after you start to use them and understand the differences and why they exist, you'll probably agree that the netStream object is logical and easy to use.
In this example you could have used the deblocking and smoothing properties now available on the video object. Those properties are covered in the following sections.
Using Transparency within Video
Flash 8 and the On2 codec provide video that supports an alpha channel. 32-bit AVI and MOV files have supported alpha channels for some time. Transparency in video gives you unparalleled creative opportunity in Flash. You can composite multiple videos, use rotoscoping-type effects, and seamlessly use ActionScript to let pixel changes in the video fire events in your ActionScript. Few other technologies allow you to do this with such ease of deployment and scripting.
Compositing clips works just like importing any other clip; you simply use movie clips or layers to determine which video is in front of which. You'll be using the transparent video available at the Beginning ActionScript 2 web site to create this example. However, if you have your own transparent video clips, go ahead and use them.
With ActionScript you’re mostly concerned with how to synchronize the time index of multiple movie clips. You’ll use setBufferTime in each clip to ensure that you have playable content in each clip and then coordinate the two clips so that they run simultaneously.
You can use the FLV CuePoints option to trigger changes in each clip if you want. However, in this example, you use the time index and a simple interval loop to track each.
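If you do use cue points, a handler might look like the following sketch. The cue point name "startForeground" and the second, paused stream myNetStream2 are assumptions for illustration; the handler only fires for FLVs that were encoded with cue points:

```actionscript
// Sketch: reacting to embedded FLV cue points instead of polling the time index.
myNetStream.onCuePoint = function(info:Object):Void {
    trace("cue point " + + " at " + info.time + " seconds");
    if ( == "startForeground") {
        // toggle the paused foreground stream so it begins playing
        myNetStream2.pause();
    }
};
```

Cue points are more robust than interval polling because they are tied to the encoded frames themselves, so they stay accurate even if the buffer stalls.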
If you have an existing video file with transparency, or you know After Effects or some other advanced video manipulation product, try this next example with your own content, make changes to be creative, and see what you can do with it.
A note about rotoscoping: Rotoscoping used to entail hours and hours of hand-cutting objects from a film strip and overlaying them onto a new strip of film. You probably can think of some old movies that used hand techniques to overlay objects onto new scenes. These techniques were replaced by blue- and green-screen systems. With Flash video you can use the BitmapData object to manually rotoscope objects by changing the alpha of specific pixel colors, but it is easier and sleeker to preprocess these pixels in After Effects or some other advanced product to create a 32-bit alpha channel clip, often called an RGBA clip.
Try It Out: Compositing Video with Movie Clips
1. Open a new FLA and save it as compositeFLVvideo.fla in your work folder.
2. Click the first frame in the timeline, open the Actions panel, and type the following ActionScript code:
var myNetConnect:NetConnection = new NetConnection();
myNetConnect.connect(null);
var myNetStream:NetStream = new NetStream(myNetConnect);
myNetStream.setBufferTime(30);
backgroundVideo.attachVideo(myNetStream);
backgroundVideo.smoothing = true;
myNetStream.play("background.flv");
myNetStream.pause();
var myNetStream2:NetStream = new NetStream(myNetConnect);
myNetStream2.setBufferTime(30);
foregroundVideo.attachVideo(myNetStream2);
foregroundVideo.smoothing = true;
myNetStream2.play("foreground.flv");
myNetStream2.pause();
myNetStream2.seek(3);
3. Open the Library panel and choose New Video from the Library panel menu to create a video symbol, and then drag it to the stage. Open the Properties panel and name the video object backgroundVideo. Resize it to the desired size.
4. Add a new layer to the root timeline. Select the first frame in this new layer. Drag another video object from the library to the stage so that it overlays the existing video object on the stage. In the Properties panel, name this video object foregroundVideo. Resize the foreground video to suit.
5. Add the following function in the frame 1 ActionScript after the code you wrote in step 2 to check the buffer status of each video and kick off events to coordinate the video playback:
ok = true;
checkBuffer = function () {
    out.text = myNetStream.time;
    if (myNetStream.bufferLength >= 30 && myNetStream2.bufferLength >= 30 && ok == true) {
        myNetStream.pause();
        ok = false;
    }
    if (myNetStream.time >= 18) {
        myNetStream2.pause();
        clearInterval(coordinator);
    }
};
coordinator = setInterval(this, "checkBuffer", 10);
6. Using the Text tool, add a new text field to the stage below your video clips. In the Properties panel name the text field out. Use this text field to display the current state of your clip's playhead position to see your code working.
7. Test your SWF. You will now see that after the background clip plays for 18 seconds, the foreground clip begins playing on cue. Try changing the second condition in the checkBuffer function to 15 seconds rather than 18.
How It Works
In this example you saw a few netStream methods in action to help you coordinate two videos meant to play at specific times.
You’ll have to be sure the target video paths in the play methods point to a valid FLV video.
In the first section of code you added in step 2, you saw the same setup as in previous examples. However, the addition to note is the seek method used on the myNetStream2 object. Although the video is paused at the start, you can seek to a specific time index. This causes the object to load the bytes for that specific section of video into the buffer so that it can play on demand.
You also paused both videos so they would not start playing on their own, regardless of bufferLength condition. This allows you to manually watch both streams and be sure that both buffers are ready to play the video you need.
The second part of the example set up an interval that checked the bufferLength of both videos. After both videos had 30 seconds of playable video ready to go, the background video was told to begin playing. Your interval then switched to begin checking for the playhead position of the background video.
Because you checked that 30 seconds of the foreground video is buffered, you can begin playing it at any time you specify.
Because you chose to seek to 3 seconds into the foreground video, the slice of foreground video in the buffer is 3 through 33. When you call the pause method, the foreground clip begins to play because it was already paused at start. Remember that the pause method is a toggle. It will resume your video if the video is already paused.
You then clear the interval, and the two videos are now playing in conjunction with one another. If you expect the connection to be slower and would like to avoid rebuffering, you can change the 30 seconds to a greater number to preload more video into your objects before manipulating them.
If you plan on compositing larger clips, it’s always a good idea to keep tabs on the buffer and play condition of each clip. If the background clip, for example, pauses because its buffer ran low, you’ll want to have an event that recoordinates the videos to ensure that they stay in synch with one another.
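Such a watchdog might look like the following sketch. The 15-second offset comes from this example's timings (the foreground starts at index 3 when the background reaches 18), and the 0.5-second drift threshold is an assumption to tune for your own content:

```actionscript
// Sketch: periodically compare the two playheads and reseek the foreground
// clip if it drifts too far from where the background clip says it should be.
resyncClips = function ():Void {
    var expected:Number = myNetStream.time - 15; // offset from this example
    if (expected > 3 && Math.abs(myNetStream2.time - expected) > 0.5) {
        myNetStream2.seek(expected);
    }
};
resyncInterval = setInterval(this, "resyncClips", 1000);
```

Because seek jumps to the nearest keyframe, a tight threshold combined with sparse keyframes can cause visible stutter; pick a threshold larger than your keyframe spacing.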
In this example you also set the smoothing property of the background video object. This is because you scaled the clip to match the foreground clip size. Had you not enabled smoothing, the clip would have appeared blocky or low quality. By using smoothing you were able to interpolate the video and create a somewhat better image. The same was true for the foreground video. Because you shrank the video in size, it too would have appeared blocky and low quality. By smoothing the video pixels you were able to keep the video at a respectable quality without re-encoding the videos. Video quality techniques are covered in the following sections.
Also, you didn’t need to use two videos. You could have used static JPG or PNG backgrounds behind your foreground video. You could have used vector graphics, the webcam object, or simply the rest of an application to act as a background. With an alpha channel, working with video in your interface has almost no limits.
Have fun!
Working with Video Quality
One of the advantages of using Flash 8 video and the On2 codec is the methods available to control quality of the video.
When video data is manipulated at runtime to affect its performance and quality, it is called post-processing. Post-processing often deals directly with pixelation and the undesirable effects of over-compression, which is exactly what Flash 8 allows. With Flash 8 you can scale your video object directly. This means rather than scaling the movie clip that contains the video, you can ask the codec that is playing back the clip to scale the original video file data. This leads to much better performance and better preservation of the original quality.
Of course, when you begin to scale any graphics on a computer, unless they are vectors, the images become blocky and distorted as each individual pixel grows larger. To help with this, ActionScript includes the deblocking property on the video object.
Deblocking and Deringing
Video compression works by subdividing each frame into blocks and subdividing these blocks into smaller blocks. Often the information for these blocks can come from keyframes or previous frames. This means some blocks will be close to what they should be, but not always accurate. Heavily compressing a clip can cause these blocks to be pronounced, because each block area needs to define the same amount of area with less original pixel data. Because the image is divided into these blocks, heavily compressed video will actually look blocky, hence the term. These blocks become more apparent when video is scaled larger. However, these blocks can also look odd when the video is made to play at smaller sizes than intended.
Many video formats are played back at only the size intended. Many of the compression techniques involve deblocking video during preprocessing. With Flash, you have much more control over how your video is presented by scaling, resizing, and deforming. The deblocking method gives you a set of tools for post-processing your video to improve these blocking issues on-the-fly. It is important to remember that these methods use CPU power to enhance the video. When you use these methods, you should be concerned about your target systems. Modern systems will have no issue deblocking video at runtime, but older systems might perform more slowly and cause your video to stutter.
Deblocking video works similarly to smoothing. However, deblocking uses the specific deblocking routine provided by the codec being used to play back the video, rather than a general smoothing method. By interpolating the edges of blocks, a smoothing effect occurs on the video image. Interpolation means examining a pixel in the context of the pixels around it and adjusting it toward an average of those neighboring values.
With Flash 8 you also have deringing, which isn't a separate method; deringing is accessed via the deblocking property. Image ringing consists of artifacts produced in video when objects move. When using block compression, objects that move across the screen can cause very active compression blocks, which attempt to compress and redefine an edge in each frame to the most efficient size. This can cause a ring or halo of a blurry rainbow-like effect on the edges of a person's arm or a moving car. Rings are most often seen in high-contrast situations such as a bright subject on a dark background. In these situations smoothing will probably solve the issue. However, smoothing also removes detail from areas where it is actually desired, such as the eyes of a subject or text in the image. Using deringing in conjunction with deblocking can create the best possible post-processing result for presenting the video.
When using high-bandwidth video, which contains a lot of data, the object will likely not need any of these methods. The effects of the methods are also not perfect. You should use these in situations in which the video is going to be scaled or zoomed for effect. You should always find a balance of bandwidth and quality that you approve of.
Different types of deblocking are used with each codec, and different combinations of deblocking can be used on the video. The deblocking property accepts a single value, which must be a Number. The available values map to specific post-processing behaviors, as described in the following table:
Value   Description
0       Lets the video object apply deblocking at a preset threshold determined by the codec.
1       Disables deblocking completely.
2       Uses the Sorenson deblocking filter.
3       Uses the On2 deblocking filter with no deringing.
4       Uses the On2 deblocking filter with the high-speed deringing filter.
5       Uses the On2 deblocking filter with the high-quality deringing filter.
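For instance, requesting On2 deblocking with the high-quality deringing filter on a scaled, smoothed clip might look like the following sketch; the instance name backgroundVideo is assumed:

```actionscript
// Sketch: post-process a scaled On2 clip at runtime.
backgroundVideo.deblocking = 5;   // On2 deblocking + high-quality deringing
backgroundVideo.smoothing = true; // interpolate scaled pixels as well
```

Remember that both settings cost CPU on playback, so value 4 (the high-speed deringer) can be a better fit when targeting older machines.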
Scaling and Transparency
Flash 8 allows you to apply x-scale, y-scale, and alpha directly to the video object. In Flash 6 and 7, it was necessary to perform these actions on a container movie clip. By applying these transformations directly to the video object, you avoid instantiating a movie clip, along with the CPU and memory cost that involves. If you must scale or use transparency on your video, it is recommended that you use the native video properties to perform these operations. The modifications are performed on the actual video pixels and, when used in conjunction with deblocking, can lead to much better video performance.
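A minimal sketch of these direct transformations, assuming a stage video instance named foregroundVideo:

```actionscript
// Sketch: transform the video object itself instead of a wrapper movie clip.
foregroundVideo._xscale = 150;  // scale the decoded pixels directly
foregroundVideo._yscale = 150;
foregroundVideo._alpha = 80;    // 80% opacity
foregroundVideo.deblocking = 4; // On2 deblocking + high-speed deringing
```

Because no container clip is created, there is one less object for the player to composite on each frame.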
Working with a Camera
The Camera object, which was introduced in Flash 6, is mainly used with Flash Communication Server. You can use the Camera object in conjunction with Flash Communication Server to make chat programs and other applications requiring a camera. However, you can do some nifty things with the camera with just a little bit of simple ActionScript.
You can use the Camera object to detect motion activity over a specified motion level. You can also set up listeners to discern the amount of motion activity and act upon it.
In later examples you see a fun way to use the camera to enact changes in your application. You will, of course, need a camera that is functioning on your computer for the following examples.
One thing to remember when working with the camera is that the Flash Player will prompt the user to allow or deny your application access to the camera.
The Camera Class Methods, Properties, and Events
The Camera class methods are described in the following table:
Method                   Return     Description
get()                    Camera     Static method that returns a reference to a camera
                                    on the system.
setMode()                Nothing    Changes the properties of the camera to the
                                    specified parameters. If the parameters are
                                    unattainable, the camera is set as close as possible.
setMotionLevel()         Nothing    Sets the amount of activity required to fire the
                                    onActivity event. The first parameter sets the
                                    amount of activity level required, and the second
                                    defines how many seconds the object should wait
                                    before declaring no activity.
setQuality()             Nothing    Sets the bytes per frame requested from the camera.
setLoopBack()            Nothing    Tells Flash Communication Server to use compression
                                    when intercepting the video feed.
setKeyFrameInterval()    Nothing    Tells Flash Communication Server what interval to
                                    insert video keyframes into the compressed video feed.
The get() method has one parameter, which is a Number. The default parameter is 0. get() is a static method that returns a reference to a camera on the system. Sending no parameter returns the first camera detected. You can use the names array property to obtain a list and request a specific camera at an explicit array index. get() is almost always used with the attachVideo method of a Video object, or of a netStream object when publishing to a server. Here's the get() syntax:
var myCam_cam:Camera = Camera.get();
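If more than one camera is installed, you can pick one explicitly by its index in the names array; the index 1 used here is an assumption for a system with a second camera:

```actionscript
// Sketch: list the available cameras and request one by index.
trace(Camera.names);                  // array of camera names on this system
var secondCam:Camera = Camera.get(1); // index into Camera.names
```

If the index does not exist, get() returns null, so check the result before calling attachVideo.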
The setMode() method has four parameters, which are as follows:
1. A Number that specifies the desired width in pixels of the video feed.
2. A Number that specifies the desired height in pixels of the video feed.
3. A Number that specifies the desired frames per second.
4. A Boolean that specifies whether the camera should keep the requested capture size even when it cannot also deliver the requested frame rate. When the exact combination is unattainable, the camera provides the closest mode possible, usually at the sacrifice of performance.
Here’s an example of the setMode() syntax:
myCam_cam.setMode(320, 240, 24, true);
The following table describes properties of the Camera class:
Property         Type      Description
activityLevel    Number    Number from 0 to 100 measuring the motion being detected.
fps              Number    Maximum desired rate of frames per second.
currentFps       Number    Number of frames currently being captured by the webcam.
bandwidth        Number    Current amount of bandwidth specified by the setQuality
                           method.
motionLevel      Number    Numeric value that must be reached to invoke the onActivity
                           event.
motionTimeOut    Number    Amount of time to wait before onActivity is called with a
                           false parameter value, after no motion is detected.
index            Number    Numeric array position of the current camera in the names
                           array list of cameras.
name             String    String name of the current camera.
names            Array     Array that lists all available cameras on the system.
height           Number    Height of the current feed in pixels.
width            Number    Width of the current feed in pixels.
The Camera class has two events:
onActivity — A Function that fires when the camera begins detecting motion, and again when motion stops.
onStatus — A Function that fires when the user allows or denies access to the camera.
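Wiring both events might look like the following sketch, assuming myCam_cam was returned by Camera.get():

```actionscript
// Sketch: respond to motion changes and to the user's allow/deny choice.
myCam_cam.onActivity = function(isActive:Boolean):Void {
    trace(isActive ? "motion detected" : "motion stopped");
};
myCam_cam.onStatus = function(info:Object):Void {
    if (info.code == "Camera.Unmuted") {
        trace("user allowed camera access");
    } else if (info.code == "Camera.Muted") {
        trace("user denied camera access");
    }
};
```

The onStatus handler is the only way to find out programmatically whether the user clicked Allow or Deny in the Flash Player privacy prompt.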
Creating a Camera Object
You use a Video object to create a Camera object. The Video object is accessible via the library title bar menu. Expand that menu to see the option for adding a new Video object to the library. If you’ve been
reading the previous sections of this chapter you know that the same object is used to play back FLV files. Now, instead of specifying an FLV stream to send to the Video object, you’re specifying a Camera object.
To specify a Camera object, you don’t use a constructor. You use the get method of the Camera object to specify the camera you’d like to access. get() expects a numerical parameter that is the index of the camera desired in the names array of the Camera object. It is possible to have multiple cameras display on one stage, but each must have its own Video object on the stage, and each must have its own Camera object returned from the get() method.
Displaying a Camera Feed as Video on the Stage
When you use the Camera object, you do not need to use the netStream object unless you plan on connecting to Flash Communication Server. Because this example does not use Flash Communication Server, you'll be using just the Video object to display a live camera image on the stage. When you couple the Camera object with Flash wrapper products such as Northcode, you can use the camera to save images when activity is detected, or at regular intervals. With this in mind, it is possible to create your own webcam management software with Flash and a Flash wrapper product that allows capture and FTP.
Because the Video object can be placed within a movie clip, you can create a BitmapData object and manually modify or examine the pixels returned by the Camera object.
The first step to diving into webcams is inserting one into your application. It is surprisingly easy. Unlike other technologies such as Java Applets, no external technologies such as QuickTime are required. Get a camera feed showing in an SWF right now so you can begin to manipulate and experiment with it.
Try It Out: Show a Camera Feed on the Stage
For this example you need a working webcam on your system. You can get a webcam relatively cheaply these days, and some digital cameras will work as webcams. After installing a webcam, test it with its native software to determine whether it’s working before attempting to import the feed into Flash.
1. Open a new FLA and save it as loadCamera.fla in your work folder.
2. Click the first frame in the timeline, open the Actions panel, and enter the following ActionScript code:
var theCamFeed:Camera = Camera.get();
myCamera.attachVideo(theCamFeed);
3. Open the Library panel. Select the menu display icon in the Library panel title bar, and choose New Video. A Setup dialog opens; be sure to select ActionScript-Controlled. A video symbol will appear in the library. Drag an instance of the video symbol from the library to the stage.
4. In the Properties panel for the video symbol, now on the stage, change the name to myCamera. You can change the height and width of the object to suit.
5. Test your work. You should now see your video playing in the top-left corner of the stage.