**Project:** endoplasmic/google-assistant
**Repository:** https://github.com/endoplasmic/google-assistant
**Language:** JavaScript 100.0%

# The Google Assistant SDK

A version in node to play around with! I've abstracted it from needing to use the mic and speakers on the device running the code (but it still can!) so that you can pass audio in and play audio back however you want to.

## Installation

You need to create a JSON file for OAuth2 permissions! Follow the instructions and then:

```
$ npm install google-assistant
```
## Usage

```js
const path = require('path');
const GoogleAssistant = require('google-assistant');

const config = {
  auth: {
    keyFilePath: path.resolve(__dirname, 'YOUR_API_KEY_FILE_PATH.json'),
    // where you want the tokens to be saved
    // will create the directory if not already there
    savedTokensPath: path.resolve(__dirname, 'tokens.json'),
    // you can also pass an oauth2 client instead if you've handled
    // auth in a different workflow. This trumps the other params.
    oauth2Client: YOUR_CLIENT,
  },
  // this param is optional, but all options will be shown
  conversation: {
    audio: {
      encodingIn: 'LINEAR16', // supported are LINEAR16 / FLAC (defaults to LINEAR16)
      sampleRateIn: 16000, // supported rates are between 16000-24000 (defaults to 16000)
      encodingOut: 'LINEAR16', // supported are LINEAR16 / MP3 / OPUS_IN_OGG (defaults to LINEAR16)
      sampleRateOut: 24000, // supported are 16000 / 24000 (defaults to 24000)
    },
    lang: 'en-US', // language code for input/output (defaults to en-US)
    deviceModelId: 'xxxxxxxx', // use if you've gone through the Device Registration process
    deviceId: 'xxxxxx', // use if you've gone through the Device Registration process
    deviceLocation: {
      coordinates: { // set the latitude and longitude of the device
        latitude: xxxxxx,
        longitude: xxxxx,
      },
    },
    textQuery: 'What time is it?', // if this is set, audio input is ignored
    isNew: true, // set this to true if you want to force a new conversation and ignore the old state
    screen: {
      isOn: true, // set this to true if you want to output results to a screen
    },
  },
};
```
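A quick note on the audio settings above: LINEAR16 is raw 16-bit mono PCM, so buffer sizes follow directly from the sample rate. This tiny helper is my own (not part of the library) and just encodes that arithmetic:

```js
// LINEAR16 = 16-bit (2-byte) samples, one channel, so:
// bytes per second = sampleRate * 2
function linear16Bytes(sampleRate, seconds) {
  return sampleRate * 2 * seconds;
}

console.log(linear16Bytes(16000, 1)); // 32000 bytes of mic input per second
console.log(linear16Bytes(24000, 1)); // 48000 bytes of assistant output per second
```

Useful when sizing ring buffers or estimating how much audio a given chunk represents.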
```js
const assistant = new GoogleAssistant(config.auth);

// starts a new conversation with the assistant
const startConversation = (conversation) => {
  // setup the conversation and send data to it
  // for a full example, see `examples/mic-speaker.js`
  conversation
    .on('audio-data', (data) => {
      // do stuff with the audio data from the server
      // usually send it to some audio output / file
    })
    .on('end-of-utterance', () => {
      // do stuff when done speaking to the assistant
      // usually just stop your audio input
    })
    .on('transcription', (data) => {
      // do stuff with the words you are saying to the assistant
    })
    .on('response', (text) => {
      // do stuff with the text that the assistant said back
    })
    .on('volume-percent', (percent) => {
      // do stuff with a volume percent change (range from 1-100)
    })
    .on('device-action', (action) => {
      // if you've set this device up to handle actions, you'll get that here
    })
    .on('screen-data', (screen) => {
      // if the screen.isOn flag was set to true, you'll get the format and data of the output
    })
    .on('ended', (error, continueConversation) => {
      // once the conversation is ended, see if we need to follow up
      if (error) console.log('Conversation Ended Error:', error);
      else if (continueConversation) assistant.start();
      else console.log('Conversation Complete');
    })
    .on('data', (data) => {
      // raw data from the google assistant conversation
      // useful for debugging or if something is not covered above
    })
    .on('error', (error) => {
      // handle error messages
    });
};

// will start a conversation and wait for audio data
// as soon as it's ready
assistant
  .on('ready', () => assistant.start(config.conversation))
  .on('started', startConversation);
```

### TypeScript

```ts
import GoogleAssistant = require("google-assistant");
const googleAssistant: GoogleAssistant = new GoogleAssistant();
```

## Examples
### Pre-reqs for the mic-speaker example

If you are on macOS and are seeing errors when installing the `speaker` dependency, try building it against the openal backend:

```
$ npm install speaker --mpg123-backend=openal
```

If you are on a Raspberry Pi and having some issues with getting the microphone to work, configure the recorder explicitly:

```js
const mic = record.start({ threshold: 0, recordProgram: 'arecord', device: 'plughw:1,0' });
```

This is assuming you have a capture device on 1,0 (hint: `arecord -l` lists your capture devices).

## Assistant

### Instance {Object}

Expects an object with the following params:
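These are the same auth params used in the Usage section. A minimal sketch (the paths are placeholders, not real defaults):

```js
const auth = {
  keyFilePath: '/path/to/YOUR_API_KEY_FILE_PATH.json', // your OAuth2 JSON key file
  savedTokensPath: '/path/to/tokens.json',             // created if not already there
  // oauth2Client: YOUR_CLIENT, // optional: an existing client, trumps the paths above
};
```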
### Events

#### ready {Assistant}

Emitted once your OAuth2 credentials have been saved. It's safe to start a conversation now. Returns an instance of the assistant that you can start conversations with (after the `ready` event is fired, though).

#### started {Conversation}

You'll get this right after a call to `start()`, along with the Conversation instance for that conversation.

### Methods

#### start([callback]) {Conversation}

This is called anytime after you've got a `ready` event.

## Conversation

### Instance [{Object}]

After you call `start()`, you'll get a Conversation instance (via the `started` event) with the following events and methods:
### Events

#### error

If things go funky, this will be called.

#### audio-out {Buffer}

Contains an audio buffer to use to pipe to a file or speaker.

#### end-of-utterance

Emitted once the server detects you are done speaking.

#### transcription {Object}

While you are speaking, you will get many of these messages. They contain the following params:
#### response {String}

The response text from the assistant.

#### volume-percent {Number}

There was a request to change the volume. The range is from 1-100.

#### device-action {Object}

There was a request to complete an action. Check out the Device Registration page for more info on creating a device instance.

#### ended {Error, Boolean}

After a call to `end()`, or once the assistant finishes, this fires with an error (if any) and a boolean telling you whether the assistant expects the conversation to continue (i.e. it asked a follow-up question).

#### screen-data {Object}

Contains information to render a visual version of the assistant's response.
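The audio buffers above are raw PCM when `encodingOut` is `LINEAR16`, so they need a WAV header before they're playable as a file. This is a sketch of that wrapping (my own helper, assuming 16-bit mono at the configured output rate; not part of the library):

```js
// Wrap raw 16-bit mono PCM in a minimal 44-byte WAV header so the
// accumulated audio buffers can be written to a playable file.
function pcmToWav(pcm, sampleRate) {
  const header = Buffer.alloc(44);
  header.write('RIFF', 0);
  header.writeUInt32LE(36 + pcm.length, 4); // file size minus the first 8 bytes
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(16, 16);             // fmt chunk size
  header.writeUInt16LE(1, 20);              // audio format: PCM
  header.writeUInt16LE(1, 22);              // channels: mono
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(sampleRate * 2, 28); // byte rate for 16-bit mono
  header.writeUInt16LE(2, 32);              // block align
  header.writeUInt16LE(16, 34);             // bits per sample
  header.write('data', 36);
  header.writeUInt32LE(pcm.length, 40);
  return Buffer.concat([header, pcm]);
}

// e.g. collect the conversation's audio chunks in an array, then:
// fs.writeFileSync('response.wav', pcmToWav(Buffer.concat(chunks), 24000));
```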
### Methods

#### write(UInt8Array)

When using audio input, this is what you use to send your audio chunks to the assistant. (see the mic-speaker example)

#### end()

Send this when you are finished playing back the assistant's response.
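`write()` expects audio as you capture it; if your source hands you one large buffer instead of a stream, you can slice it into chunks first. A small sketch (my own helper; the chunk size is an arbitrary choice, not a library requirement):

```js
// Split a large audio buffer into fixed-size chunks suitable for
// feeding to the conversation's write() one at a time.
function chunkAudio(buffer, chunkSize = 4096) {
  const chunks = [];
  for (let i = 0; i < buffer.length; i += chunkSize) {
    chunks.push(buffer.subarray(i, i + chunkSize));
  }
  return chunks;
}

// e.g.: chunkAudio(bigPcmBuffer).forEach((c) => conversation.write(c));
// then call conversation.end() once you're finished playing back the response.
```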