question: I got the following error:

```
(c:\Users\richa\AI\Cocktail\.conda) C:\Users\richa\AI\Cocktail\cocktail-fork-separation>python separate.py
Traceback (most recent call last):
  File "C:\Users\richa\AI\Cocktail\cocktail-fork-separation\separate.py", line 183, in <module>
    gr.inputs.Textbox(label="Enter Path to Audio Folder"),
AttributeError: module 'gradio' has no attribute 'inputs'
```

answer: The error message you received indicates that the `gradio` module does not have an attribute named `inputs`. Try `input` instead of `inputs`. Here is the adjusted part of your code snippet:

```python
iface = gr.Interface(
    fn=separate_soundtrack_gradio,
    inputs=[
        gr.input.Textbox(label="Enter Path to Audio Folder"),
        gr.input.Textbox(lines=2, label="Output Directory", default="Output/"),
    ],
    outputs="text",
    title="Separate Soundtrack Interface",
    description="Enter the path to the folder containing audio files to separate.",
)
```

Replace `gr.inputs` with `gr.input` and run your script again to create the Gradio interface. If you encounter any further issues or need additional assistance, feel free to ask.

question: I have the following error:

```
(c:\Users\richa\AI\Cocktail\.conda) C:\Users\richa\AI\Cocktail\cocktail-fork-separation>python separate.py
Traceback (most recent call last):
  File "C:\Users\richa\AI\Cocktail\cocktail-fork-separation\separate.py", line 183, in <module>
    gr.input.Textbox(label="Enter Path to Audio Folder"),
AttributeError: module 'gradio' has no attribute 'input'
```

answer: It appears there was a misunderstanding in my previous response: the attribute for defining input components in Gradio is `inputs`, not `input`. Here is the corrected part of your code snippet:

```python
iface = gr.Interface(
    fn=separate_soundtrack_gradio,
    inputs=[
        gr.inputs.Textbox(label="Enter Path to Audio Folder"),
        gr.inputs.Textbox(lines=2, label="Output Directory", default="Output/"),
    ],
    outputs="text",
    title="Separate Soundtrack Interface",
    description="Enter the path to the folder containing audio files to separate.",
)
```

Replace `gr.input` with `gr.inputs` and run your script again. If you encounter any further issues or need additional assistance, feel free to ask.

question: In the following code:

```python
# Copyright (C) 2023 Mitsubishi Electric Research Laboratories (MERL)
#
# SPDX-License-Identifier: MIT
from argparse import ArgumentParser
from pathlib import Path
from typing import Optional, Union

import pyloudnorm
import gradio as gr
import torch
import torchaudio

from consistency import dnr_consistency
from dnr_dataset import SAMPLE_RATE, EXT, SOURCE_NAMES
from mrx import MRX

DEFAULT_PRE_TRAINED_MODEL_PATH = Path("checkpoints") / "default_mrx_pre_trained_weights.pth"


def separate_soundtrack_gradio(folder_filepath, output_directory):
    audio_files = list(Path(folder_path).glob(".wav"))
    audio_tensor, fs = torchaudio.load(str(audio_file))
    if fs != SAMPLE_RATE:
        audio_tensor = torchaudio.functional.resample(audio_tensor, fs, SAMPLE_RATE)
    output_dict = separate_soundtrack(
        audio_tensor, separation_model=None, device=0, consistency_mode="pass", input_lufs=-27.0
    )
    print("Separation done!")
    for k, v in output_dict.items():
        output_path = Path(output_directory) / f"{k}{EXT}"
        torchaudio.save(output_path, v.cpu(), SAMPLE_RATE)


def load_default_pre_trained():
    model = MRX().eval()
    state_dict = torch.load(DEFAULT_PRE_TRAINED_MODEL_PATH)
    model.load_state_dict(state_dict)
    return model


def _mrx_output_to_dict(output: torch.tensor) -> dict:
    """
    Convert MRX() output to dictionary with one key per output source.

    :param output (torch.tensor): 3D Tensor of shape [3, channels, samples]
    :return: (dictionary): {'music': music_samples, 'speech': speech_samples, 'sfx': sfx_samples}
        where each of the x_samples are 2D Tensor of shape [channels, samples]
    """
    output_dict = {}
    for src_name, audio_data in zip(SOURCE_NAMES, output):
        output_dict[src_name] = audio_data
    return output_dict


def _compute_gain(audio_tensor: torch.tensor, target_lufs: float) -> float:
    """
    Compute the gain required to achieve a target integrated loudness.

    :param audio_tensor (torch.tensor): 2D Tensor of shape [channels, samples].
    :param target_lufs (float): Target level in loudness units full scale.
    :return gain (float): Gain that when multiplied by audio_tensor will achieve target_lufs
    """
    meter = pyloudnorm.Meter(SAMPLE_RATE)
    loudness = meter.integrated_loudness(audio_tensor.cpu().numpy().T)
    gain_lufs = target_lufs - loudness
    gain = 10 ** (gain_lufs / 20.0)
    return gain


def separate_soundtrack(
    audio_tensor: torch.tensor,
    separation_model: Optional[MRX] = None,
    device: Optional[int] = None,
    consistency_mode: Optional[str] = "pass",
    input_lufs: Optional[float] = -27.0,
):
    """
    Separates a torch.Tensor into three stems. If a separation_model is provided, it will be used,
    otherwise the included pre-trained weights will be used.

    :param audio_tensor (torch.tensor): 2D Tensor of shape [channels, samples]. Assumed samplerate of 44.1 kHz.
    :param separation_model (MRX, optional): a preloaded MRX model, or none to use included pre-trained model.
    :param device (int, optional): The gpu device for model inference. (default: -1) [cpu]
    :param consistency_mode (str, optional): choices=["all", "pass", "music_sfx"], whether to add the residual
        to estimates; 'pass' doesn't add residual, 'all' splits residual among all sources, 'music_sfx' splits
        residual among only music and sfx sources. (default: pass)
    :param input_lufs (float, optional): Add gain to input and normalize output, so audio input level matches
        average of Divide and Remaster dataset in loudness units full scale. Pass None to skip. (default: -27)
    :return: (dictionary): {'music': music_samples, 'speech': speech_samples, 'sfx': sfx_samples}
        where each of the x_samples are 2D Tensor of shape [channels, samples]
    """
    if separation_model is None:
        separation_model = load_default_pre_trained()
    if device is not None:
        separation_model = separation_model.to(device)
        audio_tensor = audio_tensor.to(device)
    with torch.no_grad():
        if input_lufs is not None:
            gain = _compute_gain(audio_tensor, input_lufs)
            audio_tensor *= gain
        output_tensor = separation_model(audio_tensor)
        output_tensor = dnr_consistency(audio_tensor, output_tensor, mode=consistency_mode)
        if input_lufs is not None:
            output_tensor /= gain
    return _mrx_output_to_dict(output_tensor)


def separate_soundtrack_file(
    audio_filepath: Union[str, Path],
    output_directory: Union[str, Path],
    separation_model: Optional[MRX] = None,
    device: Optional[int] = None,
    consistency_mode: Optional[str] = "pass",
    input_lufs: Optional[float] = -27.0,
) -> None:
    """
    Takes the path to a wav file, separates it, and saves the results in speech.wav, music.wav, and sfx.wav.
    Wraps separate_soundtrack(). Audio will be resampled if it's not at the correct samplerate.

    :param audio_filepath (Path): path to mixture audio file to be separated
    :param output_directory (Path): directory where separated audio files will be saved
    :param separation_model (MRX, optional): a preloaded MRX model, or none to use included pre-trained model.
    :param device (int, optional): The gpu device for model inference. (default: -1) [cpu]
    :param consistency_mode (str, optional): choices=["all", "pass", "music_sfx"], whether to add the residual
        to estimates; 'pass' doesn't add residual, 'all' splits residual among all sources, 'music_sfx' splits
        residual among only music and sfx sources. (default: pass)
    :param input_lufs (float, optional): Add gain to input and normalize output, so audio input level matches
        average of Divide and Remaster dataset in loudness units full scale. (default: -27)
    """
    audio_tensor, fs = torchaudio.load(audio_filepath)
    if fs != SAMPLE_RATE:
        audio_tensor = torchaudio.functional.resample(audio_tensor, fs, SAMPLE_RATE)
    output_dict = separate_soundtrack(
        audio_tensor, separation_model, device, consistency_mode=consistency_mode, input_lufs=input_lufs
    )
    for k, v in output_dict.items():
        output_path = Path(output_directory) / f"{k}{EXT}"
        torchaudio.save(output_path, v.cpu(), SAMPLE_RATE)


def cli_main():
    parser = ArgumentParser()
    parser.add_argument(
        "--audio-path",
        type=Path,
        default=("C:/Users/richa/AI/Cocktail/Input/TEST.wav"),
        help="Path to audio file to be separated in speech, music, and sound effects stems.",
    )
    parser.add_argument(
        "--out-dir",
        type=Path,
        default=Path("C:/Users/richa/AI/Cocktail/Output"),
        help="Path to directory for saving output files.",
    )
    parser.add_argument("--gpu-device", default=-1, type=int, help="The gpu device for model inference. (default: -1)")
    parser.add_argument(
        "--mixture-residual",
        default="pass",
        type=str,
        choices=["all", "pass", "music_sfx"],
        help="Whether to add the residual to estimates, 'pass' doesn't add residual, 'all' splits residual among "
        "all sources, 'music_sfx' splits residual among only music and sfx sources. (default: pass)",
    )
    args = parser.parse_args()
    # if args.gpu_device != -1 and torch.cuda.is_available():  # Check if CUDA is available
    #     device = torch.device("cuda:" + str(args.gpu_device))
    #     print(f"Using GPU: {torch.cuda.get_device_name(args.gpu_device)}")
    # else:
    #     device = torch.device("cpu")
    device = torch.device("cuda")
    cuda = True
    output_dir = args.out_dir
    output_dir.mkdir(parents=True, exist_ok=True)
    separate_soundtrack_file(args.audio_path, output_dir, device=device, consistency_mode=args.mixture_residual)
    iface = gr.Interface(
        fn=separate_soundtrack_gradio,
        inputs=[
            gr.inputs.Textbox(label="Enter Path to Audio Folder"),
            gr.inputs.Textbox(lines=2, label="Output Directory", default="Output/"),
        ],
        outputs="text",
        title="Separate Soundtrack Interface",
        description="Enter the path to the folder containing audio files to separate.",
    )
    iface.launch()


if __name__ == "__main__":
    cli_main()
```

I still get the error:

```
File "C:\Users\richa\AI\Cocktail\cocktail-fork-separation\separate.py", line 183, in <module>
    gr.inputs.Textbox(label="Enter Path to Audio Folder"),
AttributeError: module 'gradio' has no attribute 'inputs'
```

answer: The problem isn't a typo in your code; it's the Gradio version. The `gradio.inputs` and `gradio.outputs` submodules were deprecated in Gradio 3.x and removed entirely in Gradio 4.x, so on a current install neither `gr.inputs` nor `gr.input` exists. Input components are now created directly from the top-level module, and the `default=` argument was renamed to `value=`. Update the interface definition like this:

```python
iface = gr.Interface(
    fn=separate_soundtrack_gradio,
    inputs=[
        gr.Textbox(label="Enter Path to Audio Folder"),
        gr.Textbox(lines=2, label="Output Directory", value="Output/"),
    ],
    outputs="text",
    title="Separate Soundtrack Interface",
    description="Enter the path to the folder containing audio files to separate.",
)
```

Alternatively, if you want to keep the old `gr.inputs.Textbox` syntax, pin Gradio to a 3.x release:

```bash
pip uninstall gradio
pip install "gradio<4"
```

After either change, run the script again and the `AttributeError` should be gone. If it persists, make sure the library is imported as `import gradio as gr` and that no local file or folder named `gradio` is shadowing the real package.
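One more thing worth flagging: even once the interface builds, the `separate_soundtrack_gradio` function as posted will fail at runtime, because `folder_path` and `audio_file` are never defined (the parameter is `folder_filepath`, and the glob result is never iterated), and `glob(".wav")` matches nothing since the pattern lacks a wildcard. A minimal corrected sketch, assuming the rest of your script stays as-is (the per-file output prefix is my addition, so stems from different files don't overwrite each other):

```python
def separate_soundtrack_gradio(folder_filepath, output_directory):
    # "*.wav" (with the wildcard) matches files; ".wav" alone matches nothing.
    for audio_file in sorted(Path(folder_filepath).glob("*.wav")):
        audio_tensor, fs = torchaudio.load(str(audio_file))
        if fs != SAMPLE_RATE:
            audio_tensor = torchaudio.functional.resample(audio_tensor, fs, SAMPLE_RATE)
        output_dict = separate_soundtrack(
            audio_tensor, separation_model=None, device=0, consistency_mode="pass", input_lufs=-27.0
        )
        for k, v in output_dict.items():
            # Prefix each stem with the source file name so results don't collide.
            output_path = Path(output_directory) / f"{audio_file.stem}_{k}{EXT}"
            torchaudio.save(str(output_path), v.cpu(), SAMPLE_RATE)
    return "Separation done!"
```

Returning a string instead of printing also gives the `outputs="text"` component something to display. And if you're ever unsure which Gradio version is installed, you can check it from Python:

```python
import gradio as gr
print(gr.__version__)
```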

question: What is the command to add a column to a table in MySQL?

answer: The command to add a column to a table in MySQL is:

```sql
ALTER TABLE table_name ADD column_name data_type;
```

Where:

- `table_name` is the name of the table to which you want to add the column.
- `column_name` is the name of the new column you want to add.
- `data_type` is the data type that the new column will store.
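For example, to add a hypothetical `email` column to a hypothetical `users` table (both names are just for illustration):

```sql
ALTER TABLE users ADD email VARCHAR(255) NOT NULL DEFAULT '';
```

You can also control where the column is placed with `FIRST` or `AFTER existing_column`, e.g. `ALTER TABLE users ADD email VARCHAR(255) AFTER name;`.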
