
rewindButton.onRelease = function() {
    mySound.start();
    playButton.enabled = false;
    stopButton.enabled = true;
    clearInterval(progressWatch);
    progressWatch = setInterval(_root, "progress", 10);
};
rewindButton.enabled = false;
24. Save the file as musicPlayer.fla.
25. Be sure the path and filename list in step 2 are correct. Test your SWF by pressing Ctrl+Enter. If you encounter a security alert, you'll need to change your default setting to "always" when it prompts you to view your settings.
26. Once you have verified step 25, test the SWF and click a few of the UI buttons. Try changing the volume and adding different tracks. Allow a track to play through and watch the next track load automatically. You can do what you want with the user interface look and feel, including adding a background.
How It Works
This was a lengthy example, but all in all, it was fairly compact for a functioning MP3 player with quite a few interface controls.
In step 2 you described where the application could find the sound files and defined the names of the sound files in an array. An array is handy because you can step through it by index number, which is exactly what the player does later when it flips to the next track automatically.
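As a reminder of what that setup looks like, here is a minimal sketch of the kind of declarations step 2 creates; the folder and filenames are placeholders, not the book's actual values:

var soundPath:String = "sounds/";                                            // folder holding the MP3 files
var soundFiles:Array = ["trackOne.mp3", "trackTwo.mp3", "trackThree.mp3"];   // playlist
var currentTrack:Number = 0;                                                 // index into soundFiles

Because the playlist is an indexed array, advancing to another song is just a matter of changing currentTrack.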
In steps 3 and 4 you added a menu and populated it with the array of filenames. You also defined the event that loads each file as it is selected by the user.
In step 5 you added a function that automates moving through your playlist. Once a song is done, you call this function, which stores the current track index and adjusts the Menu component to place a check mark next to the currently playing track.
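A minimal sketch of such a function might look like the following; the variable names come from the hypothetical declarations above rather than the book's exact listing, and the Menu check-mark handling is omitted:

function nextTrack():Void {
    currentTrack = (currentTrack + 1) % soundFiles.length;     // wrap back to the first track
    loadSoundFile(soundPath + soundFiles[currentTrack]);       // the loader discussed in step 12
}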
In steps 6, 7, and 8 you added a numeric stepper to your interface. Although a stepper is not the most elegant control for volume, it takes up very little room and makes it easy to see how the setVolume method works. As the value changes via user interaction, the volume of the sound object is changed to reflect the current desired value. This value is also queried when the sound initially loads so that it plays at the volume indicated in the user interface.
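For example, a listener on the stepper's change event can push the new value straight into setVolume; this is a sketch, and the instance name volumeStepper is an assumption:

var volumeListener:Object = new Object();
volumeListener.change = function(evt:Object):Void {
    mySound.setVolume(evt.target.value);   // 0 to 100, matching the stepper's range
};
volumeStepper.addEventListener("change", volumeListener);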
In step 9 you initialized your sound object and defined the onSoundComplete and onID3 events. The onID3 event changes the TextArea component to display the name of the artist and song, using the id3 property of the sound object. The onSoundComplete event defines how the interface should change: which buttons should be enabled and which should not. The onSoundComplete event also calls the nextTrack method you defined in step 5, so as soon as a song finishes, the interface begins loading the next file.
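A hedged sketch of those two handlers follows; the TextArea instance name trackInfo is an assumption, and the exact button toggling here is illustrative rather than the book's listing:

mySound.onID3 = function():Void {
    // id3 tag data becomes available shortly after loadSound is called
    trackInfo.text = mySound.id3.artist + " - " + mySound.id3.songname;
};
mySound.onSoundComplete = function():Void {
    playButton.enabled = true;      // illustrative interface change
    stopButton.enabled = false;
    nextTrack();                    // advance the playlist (step 5)
};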
In steps 10 and 11 you created a loading progress bar that shows the user the percentage of bytes loaded of the selected file. This is handy because it helps explain to users why a track isn't playable yet, or shows that the download speed is fast enough to play without skipping. You use the sound object's getBytesLoaded and getBytesTotal methods to obtain the current values.
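A minimal version of that polling idea, not necessarily the book's exact approach, assuming a ProgressBar instance named loadBar in manual mode, could look like this:

loadBar.mode = "manual";
function loadProgress():Void {
    var loaded:Number = mySound.getBytesLoaded();
    var total:Number = mySound.getBytesTotal();
    if (total > 0) {
        loadBar.setProgress(loaded, total);   // displays the percentage loaded
    }
}
var loadWatch:Number = setInterval(loadProgress, 100);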
In step 12 you defined a loadSoundFile method. Although you could call the loadSound method directly, you need to change some user interface values as well as reinitialize the sound object so that the getBytesLoaded, duration, and position values return correctly. Without reinitializing them, the sound object will add the values to the existing values, rather than start from scratch.
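The essential idea, sketched here under the assumption that the streaming flag and interface updates match your own setup, is to build a fresh Sound object each time:

function loadSoundFile(path:String):Void {
    clearInterval(progressWatch);   // stop tracking the old sound
    mySound = new Sound();          // fresh object so getBytesLoaded, duration, and position reset
    mySound.loadSound(path, true);  // true = streaming; playback can begin as data arrives
    // the chapter's version also updates the button states and restarts the progress interval here
}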
In steps 13 and 14 you added a ProgressBar component to give the user interface an indication of the current position of the playhead. You use the position and duration properties of the sound object to calculate a percentage of time.
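The progress function polled by setInterval in the rewind code at the start of this section might look roughly like this; the ProgressBar instance name playBar is an assumption:

function progress():Void {
    // position and duration are both reported in milliseconds
    playBar.setProgress(mySound.position, mySound.duration);
}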
In steps 15 and 16 you added the TextArea component below the ProgressBar component. This is used to show the user information about the MP3 file being played.
In steps 17 and 18 you added a button to simply show and hide the Menu component you added in step 3. Pressing this button allows the user to select a track or view the name of the file currently playing.
The rest of the example adds the controls that change the state of the currently loaded sound. You have a play, a stop, and a rewind button. Each has a different default state depending on whether a track is playing, and each changes the state of the other buttons based on the current state of the sound object. You do this by using the enabled property of the Button component. The loadSoundFile method calls the play button's onRelease handler to make sure the interface is in the proper condition to play a file.
The play button’s onRelease action also starts the interval that changes the soundIndex values so that it reflects the proper position. Likewise, the stop button stops this loop and stops the sound playing. The rewind button can restart the loop, but starts the sound at the beginning of the track.
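As a point of comparison with the rewind code shown at the start of this section, a stop handler along these lines (a sketch, not the book's exact listing) illustrates the pattern:

stopButton.onRelease = function() {
    mySound.stop();
    clearInterval(progressWatch);   // halt the playhead-tracking loop
    playButton.enabled = true;
    stopButton.enabled = false;
    rewindButton.enabled = true;    // assumption: rewinding a stopped track is allowed
};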
That's basically it. Hopefully this showed you how quickly and easily you can create a sound control interface for any application. Obviously you didn't need to use components, but you can see how much time and coding they saved you, allowing you to concentrate on controlling the sound object.
Working with the Microphone
The microphone object was introduced in Flash 6. It is mostly used with Flash Communication Server to create chat programs and other applications requiring voice. However, you can do some nifty things with the microphone with just a little bit of simple ActionScript.
You can use the microphone to detect sound activity over a specified silence level. You can also set up listeners to discern the amount of activity and to act upon it. You can think of this as similar to the onActivity events you find in security cameras and webcams, except now for sound. There are fun ways to use the microphone to enact changes in your application, but of course, you need a microphone that is functioning on your computer to try them out.
One thing to remember when working with the microphone is that the Flash Player will prompt the user to allow or deny your application access to the microphone.
Microphone Class Methods
The following table describes the methods of the microphone class:
Method | Return | Description
get() | Microphone | Static method that returns a reference to a microphone on the system.
setGain() | Nothing | Sets the amount the microphone input should be boosted before being read into the microphone object.
setRate() | Nothing | Sets the rate at which the microphone should capture sound.
setSilenceLevel() | Nothing | Sets the threshold of activity on the microphone that should be considered sound, as well as when the microphone should time out.
setUseEchoSuppression() | Nothing | A method for reducing feedback and echo. Sometimes causes unwanted clipping of the microphone signal.
The get() method has one parameter, a Number (the default value is 0). It is a static method that returns a reference to a microphone on the system. Passing no parameter returns the first microphone detected. You can use the names array property to obtain a list and request a specific microphone at a specific array index. get() is almost always used with the attachAudio method on a NetStream or movie clip object. Here's its syntax:
var myMic_mic:Microphone = Microphone.get();
The setGain() method takes a single parameter, a Number from 0 to 100. The number sets the boost level of the microphone before it is sent to the microphone object. Here’s the syntax:
myMic_mic.setGain(55);
Setting the gain to 0 mutes the user's microphone, although the user can override this value. You can watch the value and change it back. This is useful for conference applications where only one participant should be speaking at a time.
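The microphone class doesn't enforce this for you, so one simple approach (an assumption, not code from the book) is to poll the gain periodically and force it back to 0 for a muted participant:

var myMic_mic:Microphone = Microphone.get();
myMic_mic.setGain(0);                      // mute this participant
function enforceMute():Void {
    if (myMic_mic.gain > 0) {
        myMic_mic.setGain(0);              // the user raised it; pull it back down
    }
}
var muteWatch:Number = setInterval(enforceMute, 500);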
Microphone Class Properties and Events
The following table describes the microphone class’s properties:
Property | Type | Description
activityLevel | Number | A Number from 0 to 100 measuring the sound being detected.
gain | Number | The amount by which the signal is being boosted before being sent to the microphone object.
index | Number | A Number representing the index of the current microphone in the list of microphones available on the system.
muted | Boolean | Returns true if the user has denied access to the microphone.
name | String | The name of the current microphone.
names | Array | An array of names of the microphones available on the system.
rate | Number | The kHz rate at which the microphone is capturing sound.
silenceLevel | Number | The activity threshold at which the microphone begins to accept input.
silenceTimeOut | Number | The amount of time to wait before onActivity is called with a false parameter value after silence is detected.
useEchoSuppression | Boolean | Returns true if echo suppression is enabled.
The microphone class has two events, which are explained in the following table:
Event | Type | Description
onActivity | Function | Fires when the microphone detects activity and when it detects silence.
onStatus | Function | Fires when the user allows or denies access to the microphone.
Try It Out: Creating a Microphone Object
1. Open a new FLA and save it as microphone.fla in your work folder.
2. Click the first frame in the timeline, open the Actions panel, and type in the following ActionScript code:
this.createEmptyMovieClip("microphone_mc", this.getNextHighestDepth());
var myMic_mic:Microphone = Microphone.get();
microphone_mc.attachAudio(myMic_mic);
3. Test your code by pressing Ctrl+Enter.
How It Works
In this example you have no visual indicator that the microphone is working, but if you have speakers on your computer, you should be able to hear the sound picked up by the microphone played back through them. Unfortunately, without Flash Communication Server you can't do much with the sound except track activity through onStatus, onActivity, and the other microphone properties and events, which let you fire events based on the activity level on the microphone.
In the next Try It Out, you expand on this one and also do something a bit fun with the activity level to see how you can use the microphone to affect an application.
Microphone Activity
One of the neatest features of the microphone object is the activity level. Using the activityLevel property enables you to make user interface changes based on sound input. You can, for instance, call functions based on particular microphone levels. For example, you might want to fire an event that shuts down a screen saver on a kiosk when the noise level reaches a particular value. By using the microphone as a user input method, you can add dimension and responsiveness to your applications in ways you might not have thought possible.
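A hedged sketch of the kiosk idea follows; wakeScreenSaver is a hypothetical function in your own code, and the silence level of 20 is an arbitrary example value:

var kioskMic:Microphone = Microphone.get();
kioskMic.setSilenceLevel(20);                  // anything louder than 20 counts as activity
kioskMic.onActivity = function(active:Boolean):Void {
    if (active) {
        wakeScreenSaver();                     // hypothetical: dismiss the kiosk screen saver
    }
};
this.createEmptyMovieClip("kiosk_mc", this.getNextHighestDepth());
kiosk_mc.attachAudio(kioskMic);                // attach the microphone so the player starts listening

Raising or lowering the silence level changes how sensitive the trigger is.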
Try It Out: Measuring Activity, Listening
You use the FLA from the preceding Try It Out for this exercise. If you haven’t completed that, you should do so now.
1. Open the microphone.fla in your work folder.
2. Click the first frame in the timeline, open the Actions panel, and type in the following ActionScript code:
this.createEmptyMovieClip("microphone_mc", this.getNextHighestDepth());
var myMic_mic:Microphone = Microphone.get();
myMic_mic.setUseEchoSuppression(true);
microphone_mc.attachAudio(myMic_mic);
3. Expand the User Interface section of the Components panel and drag an instance of the ProgressBar component onto the stage.
4. Select the ProgressBar component on the stage and, in the Properties panel, give the instance the name level. In the Parameters tab, change the label string to Mic Activity: %3%% or whatever you want, but you must include %3 to display the numeric value.
5. Go back to the first frame actions in the Actions panel and add the following code:
level.mode = "manual";
micLevel = function (callback) {
    level.setProgress(myMic_mic.activityLevel, 100);
};
this.listen = setInterval(this, "micLevel", 10);
6. Test your code by pressing Ctrl+Enter. You should now see an indication of microphone activity in the ProgressBar component.
7. Close the SWF and go back to the actions in frame 1. Now do something fun with the value. Change the micLevel function to look like this:
micLevel = function (callback) {
    level.setProgress(myMic_mic.activityLevel, 100);
    if (myMic_mic.activityLevel >= 90 && spike == false) {
        trace("begin detect");
        spike = true;
        spikeTime = getTimer();
    }
    if (spike == true) {
        if (myMic_mic.activityLevel < 65) {
            var duration = getTimer() - spikeTime;
            if (duration < 130) {
                var doubleClap = spikeTime - firstClap;
                if (doubleClap < 600 && doubleClap > 300) {
                    callback();
                } else {
                    firstClap = spikeTime;
                }
            }
            spike = false;
            spikeLength = 0;
        }
    }
};
8. You need to initialize some variables before starting the function, so add these declarations before the setInterval declaration:
var spike:Boolean = false;
var spikeTime:Number = 0;
var firstClap:Number = 0;
9. Now you need to define the clapper object with its onClap callback, and modify the setInterval call to pass the callback as a parameter:
var clapper:Object = {};
clapper.onClap = function() {
    trace("detected input");
};
this.listen = setInterval(this, "micLevel", 10, clapper.onClap);
10. Press Ctrl+Enter. With the SWF running, verify that the microphone is working by looking at the ProgressBar component. Clap your hands twice, with just a slight pause between claps. You should see that two claps, more often than not, fire the onClap function.
How It Works
In this example you set up an interval to watch the activity level of the microphone via the activityLevel property. You also tracked activity to enact changes in the code behavior based on the value, and added the setUseEchoSuppression method to keep the microphone from echoing excessively.
While you listened to activity on the microphone, you set up the micLevel function to look for spikes in activity. activityLevel reports values from 0 to 100. In the function you wait for the activity to reach 90 or higher, which means a loud input occurred on the microphone.
When you track a loud input, you see how long it lasts by using the getTimer method, which returns the number of milliseconds the SWF has been running. Once the activity of the microphone drops below 65, the function assumes that the loud noise has ended. If the microphone sustained the 90-or-higher activity level for less than 130 milliseconds, you assume the microphone received a spike, that is, a very fast, sharp sound. In this way you can determine with some degree of certainty that you should listen for a second spike. If two of these spikes are detected and the second occurs within 600 milliseconds of the first, you determine that a double spike has taken place. Because double spikes can happen with natural background noise, you also require the spikes to be at least 300 milliseconds apart. By narrowing the acceptable double spike to this window, you can focus on a very specific, deliberate noise, like a manmade sound.
If your function decides it’s heard a double spike, it triggers the callback function specified in the parameter of setInterval.
This example is easy to fool and does make some mistakes, triggering the callback function by accident, or seemingly not responding sometimes. However, it shows you that with just rudimentary scripting you can begin to listen for specific microphone activity occurrences. You might think of more sophisticated ways of listening to the activity level of the microphone to determine purposeful audio input.
netStream
In the examples you attach the audio from the microphone to a movie clip. If you are using Flash Communication Server, you will want to attach the audio to a netStream object. For Flash to save, capture, or transmit microphone audio, you must use Flash Communication Server. You can, if you want, use netStream without Flash Communication Server by attaching the audio to a null netStream, but this doesn't buy you anything in terms of capability. Doing so requires a few more lines of ActionScript:
var aNetConnection_nc:NetConnection = new NetConnection();
aNetConnection_nc.connect(null);
var aStream_ns:NetStream = new NetStream(aNetConnection_nc);
var myMic_mic:Microphone = Microphone.get();
myMic_mic.setUseEchoSuppression(true);
aStream_ns.attachAudio(myMic_mic);
Summary
This chapter described different media types you can load into Flash. You were introduced to image loading, sound playback and manipulation, and microphone input. You loaded images and sound while maintaining a concern for bandwidth, preloading, and performance.
As you went through the chapter, you created some practical examples that used each object, and considered the many issues involved with each. In the microphone object example, you explored ways to take it to the next step by enhancing the user interface to accept alternate forms of user input. Even without Flash Communication Server, you discovered a way to use a microphone dynamically.
Hopefully this chapter got you thinking about using Flash in new ways. The objects covered are a major part of Flash development and are often the reason Flash is chosen over other platforms for a specific application. The capability to load and coordinate many different media types into one seamless application is what differentiates Flash from other technologies.
Exercises
1. Use the MovieClipLoader class in a movie clip to create a rollover button. Utilize the PNG image files.
2. Using the sound object, load an MP3 file. Use the position value to fire events exactly when specific flourishes or events occur within the MP3 file. You'll need an interval and the getTimer method to accomplish the solution.
3. Using the microphone object, fire an event that changes a message on the screen when only a small amount of activity lasts for more than one minute.


Chapter 20
Managing Video
One of the most exciting technologies Flash provides is the capability to import and modify video on-the-fly. Flash 8 provides a whole set of new tools to work with and to enhance the video experience in Flash.
Flash 8 uses the On2 codec. Although Flash 8 can still play the Spark codec used in Flash 6 and 7, the new codec solves issues Flash 6 and 7 had on OS X and provides capabilities and clarity that previous Flash video did not have.
You can deliver your video as an FLV file (Flash video) or a SWF file (Flash movie). Embedding the video in a SWF limits you to transmitting the data over HTTP. This is often referred to as progressive video: the video is downloaded to the user's cache and plays as frames arrive. With FLV, you can also transport the video over HTTP, but you can instead use Flash Communication Server to provide the file as true streaming media. Packaging the video as a SWF enables you to use normal movie clip controls to control the clip; because movie clips are limited to 16,000 frames, so is your video. With an FLV file, you must use a specific set of controls designed around the NetConnection object, regardless of whether the FLV file is being served from Flash Communication Server.
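For reference, a minimal progressive-download FLV setup (a sketch assuming a Video object named myVideo placed on the stage from the Library, and a placeholder file path) looks like this:

var nc:NetConnection = new NetConnection();
nc.connect(null);                  // null means progressive HTTP delivery, no FlashCom server
var ns:NetStream = new NetStream(nc);
myVideo.attachVideo(ns);           // route the stream into the Video object on the stage
ns.play("video/sample.flv");       // hypothetical path to the FLV file

With Flash Communication Server, the connect call would point at an RTMP URL instead of null.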
This chapter explains Flash video and shows you how to best create and deploy video in your Flash applications.
Terms, Technology, and Quality
When dealing with Flash video, you will encounter some terms for properties that are helpful to understand so that you will be able to deliver the best video content possible. Some of the terms you see here are often misused and substituted for one another. However, after you get a grasp of the concepts, you'll understand why there is some ambiguity in the terminology.
Data Rate
This is the same concept as the data rate for MP3 files. The more data per second of video, the higher the image quality you get. A data rate that is too high will load slowly and can cause your video to pause and restart on its own as data becomes available. It is important to select a data rate that is